00:00:00.000 Started by upstream project "autotest-per-patch" build number 131947
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.025 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.539 The recommended git tool is: git
00:00:00.540 using credential 00000000-0000-0000-0000-000000000002
00:00:00.542 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.555 Fetching changes from the remote Git repository
00:00:00.559 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.574 Using shallow fetch with depth 1
00:00:00.574 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.574 > git --version # timeout=10
00:00:00.588 > git --version # 'git version 2.39.2'
00:00:00.588 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.601 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.601 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.074 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.085 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.099 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD)
00:00:06.099 > git config core.sparsecheckout # timeout=10
00:00:06.110 > git read-tree -mu HEAD # timeout=10
00:00:06.130 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5
00:00:06.152 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser"
00:00:06.152 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10
00:00:06.237 [Pipeline] Start of Pipeline
00:00:06.249 [Pipeline] library
00:00:06.250 Loading library shm_lib@master
00:00:06.250 Library shm_lib@master is cached. Copying from home.
00:00:06.261 [Pipeline] node
00:00:06.268 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:06.270 [Pipeline] {
00:00:06.278 [Pipeline] catchError
00:00:06.279 [Pipeline] {
00:00:06.289 [Pipeline] wrap
00:00:06.296 [Pipeline] {
00:00:06.302 [Pipeline] stage
00:00:06.303 [Pipeline] { (Prologue)
00:00:06.318 [Pipeline] echo
00:00:06.319 Node: VM-host-SM17
00:00:06.324 [Pipeline] cleanWs
00:00:06.333 [WS-CLEANUP] Deleting project workspace...
00:00:06.333 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.339 [WS-CLEANUP] done
00:00:06.559 [Pipeline] setCustomBuildProperty
00:00:06.672 [Pipeline] httpRequest
00:00:07.016 [Pipeline] echo
00:00:07.017 Sorcerer 10.211.164.101 is alive
00:00:07.023 [Pipeline] retry
00:00:07.024 [Pipeline] {
00:00:07.033 [Pipeline] httpRequest
00:00:07.037 HttpMethod: GET
00:00:07.037 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:07.038 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:07.042 Response Code: HTTP/1.1 200 OK
00:00:07.042 Success: Status code 200 is in the accepted range: 200,404
00:00:07.043 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:27.013 [Pipeline] }
00:00:27.030 [Pipeline] // retry
00:00:27.038 [Pipeline] sh
00:00:27.319 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:27.334 [Pipeline] httpRequest
00:00:27.717 [Pipeline] echo
00:00:27.718 Sorcerer 10.211.164.101 is alive
00:00:27.728 [Pipeline] retry
00:00:27.730 [Pipeline] {
00:00:27.743 [Pipeline] httpRequest
00:00:27.747 HttpMethod: GET
00:00:27.747 URL: http://10.211.164.101/packages/spdk_504f4c967947ef310db3981aa847167a983286fb.tar.gz
00:00:27.748 Sending request to url: http://10.211.164.101/packages/spdk_504f4c967947ef310db3981aa847167a983286fb.tar.gz
00:00:27.752 Response Code: HTTP/1.1 200 OK
00:00:27.752 Success: Status code 200 is in the accepted range: 200,404
00:00:27.753 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_504f4c967947ef310db3981aa847167a983286fb.tar.gz
00:04:18.860 [Pipeline] }
00:04:18.877 [Pipeline] // retry
00:04:18.884 [Pipeline] sh
00:04:19.161 + tar --no-same-owner -xf spdk_504f4c967947ef310db3981aa847167a983286fb.tar.gz
00:04:22.501 [Pipeline] sh
00:04:22.780 + git -C spdk log --oneline -n5
00:04:22.781 504f4c967 nvmf: rename passthrough_nsid -> passthru_nsid
00:04:22.781 e9d2fb879 nvmf: use bdev's nsid for admin command passthru
00:04:22.781 568b24fde nvmf: pass nsid to nvmf_ctrlr_identify_ns()
00:04:22.781 d631ca103 bdev: add spdk_bdev_get_nvme_nsid()
00:04:22.781 12fc2abf1 test: Remove autopackage.sh
00:04:22.798 [Pipeline] writeFile
00:04:22.812 [Pipeline] sh
00:04:23.091 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:04:23.102 [Pipeline] sh
00:04:23.380 + cat autorun-spdk.conf
00:04:23.380 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:23.380 SPDK_RUN_ASAN=1
00:04:23.380 SPDK_RUN_UBSAN=1
00:04:23.380 SPDK_TEST_RAID=1
00:04:23.380 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:23.387 RUN_NIGHTLY=0
00:04:23.388 [Pipeline] }
00:04:23.404 [Pipeline] // stage
00:04:23.420 [Pipeline] stage
00:04:23.422 [Pipeline] { (Run VM)
00:04:23.436 [Pipeline] sh
00:04:23.715 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:04:23.715 + echo 'Start stage prepare_nvme.sh'
00:04:23.715 Start stage prepare_nvme.sh
00:04:23.715 + [[ -n 4 ]]
00:04:23.715 + disk_prefix=ex4
00:04:23.715 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:04:23.715 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:04:23.715 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:04:23.715 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:23.715 ++ SPDK_RUN_ASAN=1
00:04:23.715 ++ SPDK_RUN_UBSAN=1
00:04:23.715 ++ SPDK_TEST_RAID=1
00:04:23.715 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:23.715 ++ RUN_NIGHTLY=0
00:04:23.715 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:04:23.715 + nvme_files=()
00:04:23.715 + declare -A nvme_files
00:04:23.715 + backend_dir=/var/lib/libvirt/images/backends
00:04:23.715 + nvme_files['nvme.img']=5G
00:04:23.715 + nvme_files['nvme-cmb.img']=5G
00:04:23.716 + nvme_files['nvme-multi0.img']=4G
00:04:23.716 + nvme_files['nvme-multi1.img']=4G
00:04:23.716 + nvme_files['nvme-multi2.img']=4G
00:04:23.716 + nvme_files['nvme-openstack.img']=8G
00:04:23.716 + nvme_files['nvme-zns.img']=5G
00:04:23.716 + (( SPDK_TEST_NVME_PMR == 1 ))
00:04:23.716 + (( SPDK_TEST_FTL == 1 ))
00:04:23.716 + (( SPDK_TEST_NVME_FDP == 1 ))
00:04:23.716 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:04:23.716 + for nvme in "${!nvme_files[@]}"
00:04:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:04:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:04:23.716 + for nvme in "${!nvme_files[@]}"
00:04:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:04:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:04:23.716 + for nvme in "${!nvme_files[@]}"
00:04:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:04:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:04:23.716 + for nvme in "${!nvme_files[@]}"
00:04:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:04:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:04:23.716 + for nvme in "${!nvme_files[@]}"
00:04:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:04:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:04:23.716 + for nvme in "${!nvme_files[@]}"
00:04:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:04:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:04:23.716 + for nvme in "${!nvme_files[@]}"
00:04:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:04:23.974 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:04:23.974 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:04:23.974 + echo 'End stage prepare_nvme.sh'
00:04:23.974 End stage prepare_nvme.sh
00:04:23.995 [Pipeline] sh
00:04:24.274 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:04:24.274 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:04:24.274 
00:04:24.274 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:04:24.274 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:04:24.274 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:04:24.274 HELP=0
00:04:24.274 DRY_RUN=0
00:04:24.274 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:04:24.274 NVME_DISKS_TYPE=nvme,nvme,
00:04:24.274 NVME_AUTO_CREATE=0
00:04:24.274 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:04:24.274 NVME_CMB=,,
00:04:24.274 NVME_PMR=,,
00:04:24.274 NVME_ZNS=,,
00:04:24.274 NVME_MS=,,
00:04:24.274 NVME_FDP=,,
00:04:24.274 SPDK_VAGRANT_DISTRO=fedora39
00:04:24.274 SPDK_VAGRANT_VMCPU=10
00:04:24.274 SPDK_VAGRANT_VMRAM=12288
00:04:24.274 SPDK_VAGRANT_PROVIDER=libvirt
00:04:24.274 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:04:24.274 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:04:24.274 SPDK_OPENSTACK_NETWORK=0
00:04:24.274 VAGRANT_PACKAGE_BOX=0
00:04:24.274 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:04:24.274 FORCE_DISTRO=true
00:04:24.274 VAGRANT_BOX_VERSION=
00:04:24.274 EXTRA_VAGRANTFILES=
00:04:24.274 NIC_MODEL=e1000
00:04:24.274 
00:04:24.275 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:04:24.275 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:04:27.562 Bringing machine 'default' up with 'libvirt' provider...
00:04:28.130 ==> default: Creating image (snapshot of base box volume).
00:04:28.130 ==> default: Creating domain with the following settings...
00:04:28.130 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730284369_087e20e774da13144df9
00:04:28.130 ==> default: -- Domain type: kvm
00:04:28.130 ==> default: -- Cpus: 10
00:04:28.130 ==> default: -- Feature: acpi
00:04:28.130 ==> default: -- Feature: apic
00:04:28.130 ==> default: -- Feature: pae
00:04:28.130 ==> default: -- Memory: 12288M
00:04:28.130 ==> default: -- Memory Backing: hugepages:
00:04:28.130 ==> default: -- Management MAC:
00:04:28.130 ==> default: -- Loader:
00:04:28.130 ==> default: -- Nvram:
00:04:28.130 ==> default: -- Base box: spdk/fedora39
00:04:28.130 ==> default: -- Storage pool: default
00:04:28.130 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730284369_087e20e774da13144df9.img (20G)
00:04:28.130 ==> default: -- Volume Cache: default
00:04:28.130 ==> default: -- Kernel:
00:04:28.130 ==> default: -- Initrd:
00:04:28.130 ==> default: -- Graphics Type: vnc
00:04:28.130 ==> default: -- Graphics Port: -1
00:04:28.130 ==> default: -- Graphics IP: 127.0.0.1
00:04:28.130 ==> default: -- Graphics Password: Not defined
00:04:28.130 ==> default: -- Video Type: cirrus
00:04:28.130 ==> default: -- Video VRAM: 9216
00:04:28.130 ==> default: -- Sound Type:
00:04:28.130 ==> default: -- Keymap: en-us
00:04:28.130 ==> default: -- TPM Path:
00:04:28.130 ==> default: -- INPUT: type=mouse, bus=ps2
00:04:28.130 ==> default: -- Command line args:
00:04:28.130 ==> default: -> value=-device,
00:04:28.130 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:04:28.130 ==> default: -> value=-drive,
00:04:28.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:04:28.130 ==> default: -> value=-device,
00:04:28.130 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:28.130 ==> default: -> value=-device,
00:04:28.130 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:04:28.130 ==> default: -> value=-drive,
00:04:28.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:04:28.130 ==> default: -> value=-device,
00:04:28.130 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:28.130 ==> default: -> value=-drive,
00:04:28.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:04:28.130 ==> default: -> value=-device,
00:04:28.130 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:28.130 ==> default: -> value=-drive,
00:04:28.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:04:28.130 ==> default: -> value=-device,
00:04:28.130 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:04:28.388 ==> default: Creating shared folders metadata...
00:04:28.388 ==> default: Starting domain.
00:04:29.763 ==> default: Waiting for domain to get an IP address...
00:04:47.847 ==> default: Waiting for SSH to become available...
00:04:47.847 ==> default: Configuring and enabling network interfaces...
00:04:51.143 default: SSH address: 192.168.121.61:22
00:04:51.143 default: SSH username: vagrant
00:04:51.143 default: SSH auth method: private key
00:04:53.714 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:05:01.844 ==> default: Mounting SSHFS shared folder...
00:05:02.788 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:05:02.788 ==> default: Checking Mount..
00:05:04.161 ==> default: Folder Successfully Mounted!
00:05:04.161 ==> default: Running provisioner: file...
00:05:04.727 default: ~/.gitconfig => .gitconfig
00:05:05.293 
00:05:05.293 SUCCESS!
00:05:05.293 
00:05:05.293 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:05:05.293 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:05:05.293 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:05:05.293 
00:05:05.301 [Pipeline] }
00:05:05.316 [Pipeline] // stage
00:05:05.326 [Pipeline] dir
00:05:05.327 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:05:05.329 [Pipeline] {
00:05:05.341 [Pipeline] catchError
00:05:05.343 [Pipeline] {
00:05:05.356 [Pipeline] sh
00:05:05.634 + vagrant ssh-config --host vagrant
00:05:05.634 + sed -ne /^Host/,$p
00:05:05.634 + tee ssh_conf
00:05:08.976 Host vagrant
00:05:08.976 HostName 192.168.121.61
00:05:08.976 User vagrant
00:05:08.976 Port 22
00:05:08.976 UserKnownHostsFile /dev/null
00:05:08.976 StrictHostKeyChecking no
00:05:08.976 PasswordAuthentication no
00:05:08.976 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:05:08.976 IdentitiesOnly yes
00:05:08.976 LogLevel FATAL
00:05:08.976 ForwardAgent yes
00:05:08.976 ForwardX11 yes
00:05:08.976 
00:05:08.988 [Pipeline] withEnv
00:05:08.990 [Pipeline] {
00:05:09.003 [Pipeline] sh
00:05:09.281 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:05:09.281 source /etc/os-release
00:05:09.281 [[ -e /image.version ]] && img=$(< /image.version)
00:05:09.281 # Minimal, systemd-like check.
00:05:09.281 if [[ -e /.dockerenv ]]; then
00:05:09.281 # Clear garbage from the node's name:
00:05:09.281 # agt-er_autotest_547-896 -> autotest_547-896
00:05:09.281 # $HOSTNAME is the actual container id
00:05:09.281 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:05:09.281 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:05:09.281 # We can assume this is a mount from a host where container is running,
00:05:09.281 # so fetch its hostname to easily identify the target swarm worker.
00:05:09.281 container="$(< /etc/hostname) ($agent)"
00:05:09.281 else
00:05:09.281 # Fallback
00:05:09.281 container=$agent
00:05:09.281 fi
00:05:09.281 fi
00:05:09.281 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:05:09.281 
00:05:09.549 [Pipeline] }
00:05:09.567 [Pipeline] // withEnv
00:05:09.576 [Pipeline] setCustomBuildProperty
00:05:09.591 [Pipeline] stage
00:05:09.593 [Pipeline] { (Tests)
00:05:09.610 [Pipeline] sh
00:05:09.896 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:05:10.171 [Pipeline] sh
00:05:10.452 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:05:10.725 [Pipeline] timeout
00:05:10.726 Timeout set to expire in 1 hr 30 min
00:05:10.728 [Pipeline] {
00:05:10.744 [Pipeline] sh
00:05:11.026 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:05:11.628 HEAD is now at 504f4c967 nvmf: rename passthrough_nsid -> passthru_nsid
00:05:11.640 [Pipeline] sh
00:05:11.921 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:05:12.193 [Pipeline] sh
00:05:12.474 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:05:12.749 [Pipeline] sh
00:05:13.027 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:05:13.286 ++ readlink -f spdk_repo
00:05:13.286 + DIR_ROOT=/home/vagrant/spdk_repo
00:05:13.286 + [[ -n /home/vagrant/spdk_repo ]]
00:05:13.286 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:05:13.286 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:05:13.286 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:05:13.286 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:05:13.286 + [[ -d /home/vagrant/spdk_repo/output ]]
00:05:13.286 + [[ raid-vg-autotest == pkgdep-* ]]
00:05:13.286 + cd /home/vagrant/spdk_repo
00:05:13.286 + source /etc/os-release
00:05:13.286 ++ NAME='Fedora Linux'
00:05:13.286 ++ VERSION='39 (Cloud Edition)'
00:05:13.286 ++ ID=fedora
00:05:13.286 ++ VERSION_ID=39
00:05:13.286 ++ VERSION_CODENAME=
00:05:13.286 ++ PLATFORM_ID=platform:f39
00:05:13.286 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:05:13.286 ++ ANSI_COLOR='0;38;2;60;110;180'
00:05:13.286 ++ LOGO=fedora-logo-icon
00:05:13.286 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:05:13.286 ++ HOME_URL=https://fedoraproject.org/
00:05:13.286 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:05:13.286 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:05:13.286 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:05:13.286 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:05:13.286 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:05:13.286 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:05:13.286 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:05:13.286 ++ SUPPORT_END=2024-11-12
00:05:13.286 ++ VARIANT='Cloud Edition'
00:05:13.286 ++ VARIANT_ID=cloud
00:05:13.286 + uname -a
00:05:13.286 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:05:13.286 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:13.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:13.544 Hugepages
00:05:13.544 node hugesize free / total
00:05:13.544 node0 1048576kB 0 / 0
00:05:13.544 node0 2048kB 0 / 0
00:05:13.544 
00:05:13.544 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:13.802 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:13.802 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:13.802 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:05:13.802 + rm -f /tmp/spdk-ld-path
00:05:13.802 + source autorun-spdk.conf
00:05:13.802 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:13.802 ++ SPDK_RUN_ASAN=1
00:05:13.802 ++ SPDK_RUN_UBSAN=1
00:05:13.802 ++ SPDK_TEST_RAID=1
00:05:13.802 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:13.802 ++ RUN_NIGHTLY=0
00:05:13.802 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:05:13.802 + [[ -n '' ]]
00:05:13.802 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:05:13.802 + for M in /var/spdk/build-*-manifest.txt
00:05:13.802 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:05:13.802 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:05:13.802 + for M in /var/spdk/build-*-manifest.txt
00:05:13.802 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:05:13.802 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:05:13.802 + for M in /var/spdk/build-*-manifest.txt
00:05:13.802 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:05:13.802 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:05:13.802 ++ uname
00:05:13.802 + [[ Linux == \L\i\n\u\x ]]
00:05:13.802 + sudo dmesg -T
00:05:13.802 + sudo dmesg --clear
00:05:13.802 + dmesg_pid=5200
00:05:13.802 + sudo dmesg -Tw
00:05:13.802 + [[ Fedora Linux == FreeBSD ]]
00:05:13.802 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:13.802 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:05:13.802 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:05:13.802 + [[ -x /usr/src/fio-static/fio ]]
00:05:13.802 + export FIO_BIN=/usr/src/fio-static/fio
00:05:13.802 + FIO_BIN=/usr/src/fio-static/fio
00:05:13.802 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:05:13.802 + [[ ! -v VFIO_QEMU_BIN ]]
00:05:13.802 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:05:13.802 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:13.802 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:05:13.802 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:05:13.802 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:13.802 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:05:13.802 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:13.802 10:33:35 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:05:13.802 10:33:35 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:13.802 10:33:35 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:13.802 10:33:35 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:05:13.802 10:33:35 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:05:13.802 10:33:35 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:05:13.802 10:33:35 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:13.802 10:33:35 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:05:13.802 10:33:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:05:13.802 10:33:35 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:14.061 10:33:35 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:05:14.061 10:33:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:14.061 10:33:35 -- scripts/common.sh@15 -- $ shopt -s extglob
00:05:14.061 10:33:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:05:14.061 10:33:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:14.061 10:33:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:14.061 10:33:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:14.061 10:33:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:14.061 10:33:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:14.061 10:33:35 -- paths/export.sh@5 -- $ export PATH
00:05:14.061 10:33:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:14.061 10:33:35 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:05:14.061 10:33:35 -- common/autobuild_common.sh@486 -- $ date +%s
00:05:14.061 10:33:35 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730284415.XXXXXX
00:05:14.061 10:33:35 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730284415.1uE6KB
00:05:14.061 10:33:35 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:05:14.061 10:33:35 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:05:14.061 10:33:35 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:05:14.061 10:33:35 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:05:14.061 10:33:35 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:05:14.061 10:33:35 -- common/autobuild_common.sh@502 -- $ get_config_params
00:05:14.061 10:33:35 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:05:14.061 10:33:35 -- common/autotest_common.sh@10 -- $ set +x
00:05:14.061 10:33:35 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:05:14.061 10:33:35 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:05:14.061 10:33:35 -- pm/common@17 -- $ local monitor
00:05:14.061 10:33:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:14.061 10:33:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:14.061 10:33:35 -- pm/common@25 -- $ sleep 1
00:05:14.061 10:33:35 -- pm/common@21 -- $ date +%s
00:05:14.061 10:33:35 -- pm/common@21 -- $ date +%s
00:05:14.061 10:33:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730284415
00:05:14.061 10:33:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730284415
00:05:14.061 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730284415_collect-vmstat.pm.log
00:05:14.061 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730284415_collect-cpu-load.pm.log
00:05:15.022 10:33:36 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:05:15.022 10:33:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:15.022 10:33:36 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:15.022 10:33:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:05:15.022 10:33:36 -- spdk/autobuild.sh@16 -- $ date -u
00:05:15.022 Wed Oct 30 10:33:36 AM UTC 2024
00:05:15.022 10:33:36 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:15.022 v25.01-pre-127-g504f4c967
00:05:15.022 10:33:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:05:15.022 10:33:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:05:15.022 10:33:36 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:05:15.022 10:33:36 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:05:15.022 10:33:36 -- common/autotest_common.sh@10 -- $ set +x
00:05:15.022 ************************************
00:05:15.022 START TEST asan
00:05:15.022 ************************************
00:05:15.022 using asan
00:05:15.022 10:33:36 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:05:15.022 
00:05:15.022 real	0m0.000s
00:05:15.022 user	0m0.000s
00:05:15.022 sys	0m0.000s
00:05:15.022 10:33:36 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:05:15.022 10:33:36 asan -- common/autotest_common.sh@10 -- $ set +x
00:05:15.022 ************************************
00:05:15.022 END TEST asan
00:05:15.022 ************************************
00:05:15.022 10:33:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:15.022 10:33:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:15.022 10:33:36 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:05:15.022 10:33:36 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:05:15.022 10:33:36 -- common/autotest_common.sh@10 -- $ set +x
00:05:15.022 ************************************
00:05:15.022 START TEST ubsan
00:05:15.022 ************************************
00:05:15.022 using ubsan
00:05:15.022 10:33:36 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:05:15.023 
00:05:15.023 real	0m0.000s
00:05:15.023 user	0m0.000s
00:05:15.023 sys	0m0.000s
00:05:15.023 10:33:36 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:05:15.023 ************************************
00:05:15.023 END TEST ubsan
00:05:15.023 10:33:36 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:15.023 ************************************
00:05:15.023 10:33:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:15.023 10:33:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:15.023 10:33:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:15.023 10:33:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:15.023 10:33:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:15.023 10:33:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:15.023 10:33:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:15.023 10:33:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:15.023 10:33:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:05:15.282 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:15.282 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:15.540 Using 'verbs' RDMA provider
00:05:29.118 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:43.992 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:43.992 Creating mk/config.mk...done.
00:05:43.992 Creating mk/cc.flags.mk...done.
00:05:43.992 Type 'make' to build.
00:05:43.992 10:34:04 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:43.992 10:34:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:05:43.992 10:34:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:05:43.992 10:34:04 -- common/autotest_common.sh@10 -- $ set +x
00:05:43.992 ************************************
00:05:43.992 START TEST make
00:05:43.992 ************************************
00:05:43.992 10:34:04 make -- common/autotest_common.sh@1127 -- $ make -j10
00:05:43.992 make[1]: Nothing to be done for 'all'.
00:06:02.072 The Meson build system
00:06:02.072 Version: 1.5.0
00:06:02.072 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:06:02.072 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:06:02.072 Build type: native build
00:06:02.072 Program cat found: YES (/usr/bin/cat)
00:06:02.072 Project name: DPDK
00:06:02.072 Project version: 24.03.0
00:06:02.072 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:02.072 C linker for the host machine: cc ld.bfd 2.40-14
00:06:02.072 Host machine cpu family: x86_64
00:06:02.072 Host machine cpu: x86_64
00:06:02.072 Message: ## Building in Developer Mode ##
00:06:02.072 Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:02.072 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:06:02.072 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:02.072 Program python3 found: YES (/usr/bin/python3)
00:06:02.072 Program cat found: YES (/usr/bin/cat)
00:06:02.072 Compiler for C supports arguments -march=native: YES
00:06:02.072 Checking for size of "void *" : 8
00:06:02.072 Checking for size of "void *" : 8 (cached)
00:06:02.072 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:06:02.072 Library m found: YES
00:06:02.072 Library numa found: YES
00:06:02.072 Has header "numaif.h" : YES
00:06:02.072 Library fdt found: NO
00:06:02.072 Library execinfo found: NO
00:06:02.072 Has header "execinfo.h" : YES
00:06:02.072 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:02.072 Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:02.072 Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:02.072 Run-time dependency jansson found: NO (tried pkgconfig)
00:06:02.072 Run-time dependency openssl found: YES 3.1.1
00:06:02.072 Run-time dependency libpcap found: YES 1.10.4
00:06:02.072 Has header "pcap.h" with dependency
libpcap: YES 00:06:02.072 Compiler for C supports arguments -Wcast-qual: YES 00:06:02.072 Compiler for C supports arguments -Wdeprecated: YES 00:06:02.072 Compiler for C supports arguments -Wformat: YES 00:06:02.072 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:02.072 Compiler for C supports arguments -Wformat-security: NO 00:06:02.072 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:02.072 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:02.072 Compiler for C supports arguments -Wnested-externs: YES 00:06:02.072 Compiler for C supports arguments -Wold-style-definition: YES 00:06:02.072 Compiler for C supports arguments -Wpointer-arith: YES 00:06:02.072 Compiler for C supports arguments -Wsign-compare: YES 00:06:02.072 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:02.072 Compiler for C supports arguments -Wundef: YES 00:06:02.072 Compiler for C supports arguments -Wwrite-strings: YES 00:06:02.072 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:02.072 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:02.072 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:02.072 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:02.072 Program objdump found: YES (/usr/bin/objdump) 00:06:02.072 Compiler for C supports arguments -mavx512f: YES 00:06:02.072 Checking if "AVX512 checking" compiles: YES 00:06:02.072 Fetching value of define "__SSE4_2__" : 1 00:06:02.072 Fetching value of define "__AES__" : 1 00:06:02.072 Fetching value of define "__AVX__" : 1 00:06:02.072 Fetching value of define "__AVX2__" : 1 00:06:02.072 Fetching value of define "__AVX512BW__" : (undefined) 00:06:02.072 Fetching value of define "__AVX512CD__" : (undefined) 00:06:02.072 Fetching value of define "__AVX512DQ__" : (undefined) 00:06:02.072 Fetching value of define "__AVX512F__" : (undefined) 00:06:02.072 Fetching value of define "__AVX512VL__" : 
(undefined) 00:06:02.072 Fetching value of define "__PCLMUL__" : 1 00:06:02.072 Fetching value of define "__RDRND__" : 1 00:06:02.072 Fetching value of define "__RDSEED__" : 1 00:06:02.072 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:02.072 Fetching value of define "__znver1__" : (undefined) 00:06:02.072 Fetching value of define "__znver2__" : (undefined) 00:06:02.072 Fetching value of define "__znver3__" : (undefined) 00:06:02.072 Fetching value of define "__znver4__" : (undefined) 00:06:02.072 Library asan found: YES 00:06:02.072 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:02.072 Message: lib/log: Defining dependency "log" 00:06:02.072 Message: lib/kvargs: Defining dependency "kvargs" 00:06:02.072 Message: lib/telemetry: Defining dependency "telemetry" 00:06:02.072 Library rt found: YES 00:06:02.072 Checking for function "getentropy" : NO 00:06:02.072 Message: lib/eal: Defining dependency "eal" 00:06:02.072 Message: lib/ring: Defining dependency "ring" 00:06:02.072 Message: lib/rcu: Defining dependency "rcu" 00:06:02.072 Message: lib/mempool: Defining dependency "mempool" 00:06:02.072 Message: lib/mbuf: Defining dependency "mbuf" 00:06:02.072 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:02.072 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:06:02.072 Compiler for C supports arguments -mpclmul: YES 00:06:02.072 Compiler for C supports arguments -maes: YES 00:06:02.072 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:02.072 Compiler for C supports arguments -mavx512bw: YES 00:06:02.072 Compiler for C supports arguments -mavx512dq: YES 00:06:02.072 Compiler for C supports arguments -mavx512vl: YES 00:06:02.072 Compiler for C supports arguments -mvpclmulqdq: YES 00:06:02.072 Compiler for C supports arguments -mavx2: YES 00:06:02.072 Compiler for C supports arguments -mavx: YES 00:06:02.072 Message: lib/net: Defining dependency "net" 00:06:02.072 Message: lib/meter: Defining 
dependency "meter" 00:06:02.072 Message: lib/ethdev: Defining dependency "ethdev" 00:06:02.072 Message: lib/pci: Defining dependency "pci" 00:06:02.072 Message: lib/cmdline: Defining dependency "cmdline" 00:06:02.072 Message: lib/hash: Defining dependency "hash" 00:06:02.072 Message: lib/timer: Defining dependency "timer" 00:06:02.072 Message: lib/compressdev: Defining dependency "compressdev" 00:06:02.072 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:02.072 Message: lib/dmadev: Defining dependency "dmadev" 00:06:02.072 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:02.072 Message: lib/power: Defining dependency "power" 00:06:02.072 Message: lib/reorder: Defining dependency "reorder" 00:06:02.072 Message: lib/security: Defining dependency "security" 00:06:02.072 Has header "linux/userfaultfd.h" : YES 00:06:02.072 Has header "linux/vduse.h" : YES 00:06:02.072 Message: lib/vhost: Defining dependency "vhost" 00:06:02.072 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:02.072 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:02.072 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:02.072 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:02.072 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:02.072 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:02.072 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:02.072 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:02.072 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:02.072 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:02.072 Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:02.072 Configuring doxy-api-html.conf using configuration 00:06:02.072 Configuring doxy-api-man.conf using configuration 00:06:02.072 Program mandb found: YES 
(/usr/bin/mandb) 00:06:02.072 Program sphinx-build found: NO 00:06:02.072 Configuring rte_build_config.h using configuration 00:06:02.072 Message: 00:06:02.073 ================= 00:06:02.073 Applications Enabled 00:06:02.073 ================= 00:06:02.073 00:06:02.073 apps: 00:06:02.073 00:06:02.073 00:06:02.073 Message: 00:06:02.073 ================= 00:06:02.073 Libraries Enabled 00:06:02.073 ================= 00:06:02.073 00:06:02.073 libs: 00:06:02.073 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:02.073 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:02.073 cryptodev, dmadev, power, reorder, security, vhost, 00:06:02.073 00:06:02.073 Message: 00:06:02.073 =============== 00:06:02.073 Drivers Enabled 00:06:02.073 =============== 00:06:02.073 00:06:02.073 common: 00:06:02.073 00:06:02.073 bus: 00:06:02.073 pci, vdev, 00:06:02.073 mempool: 00:06:02.073 ring, 00:06:02.073 dma: 00:06:02.073 00:06:02.073 net: 00:06:02.073 00:06:02.073 crypto: 00:06:02.073 00:06:02.073 compress: 00:06:02.073 00:06:02.073 vdpa: 00:06:02.073 00:06:02.073 00:06:02.073 Message: 00:06:02.073 ================= 00:06:02.073 Content Skipped 00:06:02.073 ================= 00:06:02.073 00:06:02.073 apps: 00:06:02.073 dumpcap: explicitly disabled via build config 00:06:02.073 graph: explicitly disabled via build config 00:06:02.073 pdump: explicitly disabled via build config 00:06:02.073 proc-info: explicitly disabled via build config 00:06:02.073 test-acl: explicitly disabled via build config 00:06:02.073 test-bbdev: explicitly disabled via build config 00:06:02.073 test-cmdline: explicitly disabled via build config 00:06:02.073 test-compress-perf: explicitly disabled via build config 00:06:02.073 test-crypto-perf: explicitly disabled via build config 00:06:02.073 test-dma-perf: explicitly disabled via build config 00:06:02.073 test-eventdev: explicitly disabled via build config 00:06:02.073 test-fib: explicitly disabled via build config 00:06:02.073 
test-flow-perf: explicitly disabled via build config 00:06:02.073 test-gpudev: explicitly disabled via build config 00:06:02.073 test-mldev: explicitly disabled via build config 00:06:02.073 test-pipeline: explicitly disabled via build config 00:06:02.073 test-pmd: explicitly disabled via build config 00:06:02.073 test-regex: explicitly disabled via build config 00:06:02.073 test-sad: explicitly disabled via build config 00:06:02.073 test-security-perf: explicitly disabled via build config 00:06:02.073 00:06:02.073 libs: 00:06:02.073 argparse: explicitly disabled via build config 00:06:02.073 metrics: explicitly disabled via build config 00:06:02.073 acl: explicitly disabled via build config 00:06:02.073 bbdev: explicitly disabled via build config 00:06:02.073 bitratestats: explicitly disabled via build config 00:06:02.073 bpf: explicitly disabled via build config 00:06:02.073 cfgfile: explicitly disabled via build config 00:06:02.073 distributor: explicitly disabled via build config 00:06:02.073 efd: explicitly disabled via build config 00:06:02.073 eventdev: explicitly disabled via build config 00:06:02.073 dispatcher: explicitly disabled via build config 00:06:02.073 gpudev: explicitly disabled via build config 00:06:02.073 gro: explicitly disabled via build config 00:06:02.073 gso: explicitly disabled via build config 00:06:02.073 ip_frag: explicitly disabled via build config 00:06:02.073 jobstats: explicitly disabled via build config 00:06:02.073 latencystats: explicitly disabled via build config 00:06:02.073 lpm: explicitly disabled via build config 00:06:02.073 member: explicitly disabled via build config 00:06:02.073 pcapng: explicitly disabled via build config 00:06:02.073 rawdev: explicitly disabled via build config 00:06:02.073 regexdev: explicitly disabled via build config 00:06:02.073 mldev: explicitly disabled via build config 00:06:02.073 rib: explicitly disabled via build config 00:06:02.073 sched: explicitly disabled via build config 00:06:02.073 
stack: explicitly disabled via build config 00:06:02.073 ipsec: explicitly disabled via build config 00:06:02.073 pdcp: explicitly disabled via build config 00:06:02.073 fib: explicitly disabled via build config 00:06:02.073 port: explicitly disabled via build config 00:06:02.073 pdump: explicitly disabled via build config 00:06:02.073 table: explicitly disabled via build config 00:06:02.073 pipeline: explicitly disabled via build config 00:06:02.073 graph: explicitly disabled via build config 00:06:02.073 node: explicitly disabled via build config 00:06:02.073 00:06:02.073 drivers: 00:06:02.073 common/cpt: not in enabled drivers build config 00:06:02.073 common/dpaax: not in enabled drivers build config 00:06:02.073 common/iavf: not in enabled drivers build config 00:06:02.073 common/idpf: not in enabled drivers build config 00:06:02.073 common/ionic: not in enabled drivers build config 00:06:02.073 common/mvep: not in enabled drivers build config 00:06:02.073 common/octeontx: not in enabled drivers build config 00:06:02.073 bus/auxiliary: not in enabled drivers build config 00:06:02.073 bus/cdx: not in enabled drivers build config 00:06:02.073 bus/dpaa: not in enabled drivers build config 00:06:02.073 bus/fslmc: not in enabled drivers build config 00:06:02.073 bus/ifpga: not in enabled drivers build config 00:06:02.073 bus/platform: not in enabled drivers build config 00:06:02.073 bus/uacce: not in enabled drivers build config 00:06:02.073 bus/vmbus: not in enabled drivers build config 00:06:02.073 common/cnxk: not in enabled drivers build config 00:06:02.073 common/mlx5: not in enabled drivers build config 00:06:02.073 common/nfp: not in enabled drivers build config 00:06:02.073 common/nitrox: not in enabled drivers build config 00:06:02.073 common/qat: not in enabled drivers build config 00:06:02.073 common/sfc_efx: not in enabled drivers build config 00:06:02.073 mempool/bucket: not in enabled drivers build config 00:06:02.073 mempool/cnxk: not in enabled 
drivers build config 00:06:02.073 mempool/dpaa: not in enabled drivers build config 00:06:02.073 mempool/dpaa2: not in enabled drivers build config 00:06:02.073 mempool/octeontx: not in enabled drivers build config 00:06:02.073 mempool/stack: not in enabled drivers build config 00:06:02.073 dma/cnxk: not in enabled drivers build config 00:06:02.073 dma/dpaa: not in enabled drivers build config 00:06:02.073 dma/dpaa2: not in enabled drivers build config 00:06:02.073 dma/hisilicon: not in enabled drivers build config 00:06:02.073 dma/idxd: not in enabled drivers build config 00:06:02.073 dma/ioat: not in enabled drivers build config 00:06:02.073 dma/skeleton: not in enabled drivers build config 00:06:02.073 net/af_packet: not in enabled drivers build config 00:06:02.073 net/af_xdp: not in enabled drivers build config 00:06:02.073 net/ark: not in enabled drivers build config 00:06:02.073 net/atlantic: not in enabled drivers build config 00:06:02.073 net/avp: not in enabled drivers build config 00:06:02.073 net/axgbe: not in enabled drivers build config 00:06:02.073 net/bnx2x: not in enabled drivers build config 00:06:02.073 net/bnxt: not in enabled drivers build config 00:06:02.073 net/bonding: not in enabled drivers build config 00:06:02.073 net/cnxk: not in enabled drivers build config 00:06:02.073 net/cpfl: not in enabled drivers build config 00:06:02.073 net/cxgbe: not in enabled drivers build config 00:06:02.073 net/dpaa: not in enabled drivers build config 00:06:02.073 net/dpaa2: not in enabled drivers build config 00:06:02.073 net/e1000: not in enabled drivers build config 00:06:02.073 net/ena: not in enabled drivers build config 00:06:02.073 net/enetc: not in enabled drivers build config 00:06:02.073 net/enetfec: not in enabled drivers build config 00:06:02.073 net/enic: not in enabled drivers build config 00:06:02.073 net/failsafe: not in enabled drivers build config 00:06:02.073 net/fm10k: not in enabled drivers build config 00:06:02.073 net/gve: not in 
enabled drivers build config 00:06:02.073 net/hinic: not in enabled drivers build config 00:06:02.073 net/hns3: not in enabled drivers build config 00:06:02.073 net/i40e: not in enabled drivers build config 00:06:02.073 net/iavf: not in enabled drivers build config 00:06:02.073 net/ice: not in enabled drivers build config 00:06:02.073 net/idpf: not in enabled drivers build config 00:06:02.073 net/igc: not in enabled drivers build config 00:06:02.073 net/ionic: not in enabled drivers build config 00:06:02.073 net/ipn3ke: not in enabled drivers build config 00:06:02.073 net/ixgbe: not in enabled drivers build config 00:06:02.073 net/mana: not in enabled drivers build config 00:06:02.073 net/memif: not in enabled drivers build config 00:06:02.073 net/mlx4: not in enabled drivers build config 00:06:02.073 net/mlx5: not in enabled drivers build config 00:06:02.073 net/mvneta: not in enabled drivers build config 00:06:02.073 net/mvpp2: not in enabled drivers build config 00:06:02.073 net/netvsc: not in enabled drivers build config 00:06:02.073 net/nfb: not in enabled drivers build config 00:06:02.073 net/nfp: not in enabled drivers build config 00:06:02.073 net/ngbe: not in enabled drivers build config 00:06:02.073 net/null: not in enabled drivers build config 00:06:02.073 net/octeontx: not in enabled drivers build config 00:06:02.073 net/octeon_ep: not in enabled drivers build config 00:06:02.073 net/pcap: not in enabled drivers build config 00:06:02.073 net/pfe: not in enabled drivers build config 00:06:02.073 net/qede: not in enabled drivers build config 00:06:02.073 net/ring: not in enabled drivers build config 00:06:02.073 net/sfc: not in enabled drivers build config 00:06:02.073 net/softnic: not in enabled drivers build config 00:06:02.073 net/tap: not in enabled drivers build config 00:06:02.073 net/thunderx: not in enabled drivers build config 00:06:02.073 net/txgbe: not in enabled drivers build config 00:06:02.073 net/vdev_netvsc: not in enabled drivers build 
config 00:06:02.073 net/vhost: not in enabled drivers build config 00:06:02.073 net/virtio: not in enabled drivers build config 00:06:02.073 net/vmxnet3: not in enabled drivers build config 00:06:02.073 raw/*: missing internal dependency, "rawdev" 00:06:02.073 crypto/armv8: not in enabled drivers build config 00:06:02.073 crypto/bcmfs: not in enabled drivers build config 00:06:02.073 crypto/caam_jr: not in enabled drivers build config 00:06:02.073 crypto/ccp: not in enabled drivers build config 00:06:02.073 crypto/cnxk: not in enabled drivers build config 00:06:02.073 crypto/dpaa_sec: not in enabled drivers build config 00:06:02.073 crypto/dpaa2_sec: not in enabled drivers build config 00:06:02.073 crypto/ipsec_mb: not in enabled drivers build config 00:06:02.073 crypto/mlx5: not in enabled drivers build config 00:06:02.073 crypto/mvsam: not in enabled drivers build config 00:06:02.073 crypto/nitrox: not in enabled drivers build config 00:06:02.073 crypto/null: not in enabled drivers build config 00:06:02.073 crypto/octeontx: not in enabled drivers build config 00:06:02.073 crypto/openssl: not in enabled drivers build config 00:06:02.073 crypto/scheduler: not in enabled drivers build config 00:06:02.073 crypto/uadk: not in enabled drivers build config 00:06:02.073 crypto/virtio: not in enabled drivers build config 00:06:02.073 compress/isal: not in enabled drivers build config 00:06:02.073 compress/mlx5: not in enabled drivers build config 00:06:02.073 compress/nitrox: not in enabled drivers build config 00:06:02.073 compress/octeontx: not in enabled drivers build config 00:06:02.073 compress/zlib: not in enabled drivers build config 00:06:02.073 regex/*: missing internal dependency, "regexdev" 00:06:02.073 ml/*: missing internal dependency, "mldev" 00:06:02.073 vdpa/ifc: not in enabled drivers build config 00:06:02.073 vdpa/mlx5: not in enabled drivers build config 00:06:02.073 vdpa/nfp: not in enabled drivers build config 00:06:02.073 vdpa/sfc: not in enabled 
drivers build config 00:06:02.073 event/*: missing internal dependency, "eventdev" 00:06:02.073 baseband/*: missing internal dependency, "bbdev" 00:06:02.073 gpu/*: missing internal dependency, "gpudev" 00:06:02.073 00:06:02.073 00:06:02.073 Build targets in project: 85 00:06:02.073 00:06:02.073 DPDK 24.03.0 00:06:02.073 00:06:02.073 User defined options 00:06:02.073 buildtype : debug 00:06:02.073 default_library : shared 00:06:02.073 libdir : lib 00:06:02.073 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:02.073 b_sanitize : address 00:06:02.073 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:02.073 c_link_args : 00:06:02.073 cpu_instruction_set: native 00:06:02.073 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:02.073 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:02.073 enable_docs : false 00:06:02.073 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:06:02.073 enable_kmods : false 00:06:02.073 max_lcores : 128 00:06:02.073 tests : false 00:06:02.073 00:06:02.073 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:02.073 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:02.073 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:06:02.073 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:02.073 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:02.073 [4/268] Linking static target lib/librte_kvargs.a 00:06:02.073 [5/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:06:02.073 [6/268] Linking static target lib/librte_log.a 00:06:02.641 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.641 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:02.901 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:03.160 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:03.160 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:03.160 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:03.160 [13/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.160 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:03.160 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:03.418 [16/268] Linking target lib/librte_log.so.24.1 00:06:03.418 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:03.418 [18/268] Linking static target lib/librte_telemetry.a 00:06:03.418 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:03.418 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:03.677 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:03.677 [22/268] Linking target lib/librte_kvargs.so.24.1 00:06:03.936 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:03.936 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:04.194 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:04.194 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:04.451 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:04.451 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:04.451 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.709 [30/268] Linking target lib/librte_telemetry.so.24.1 00:06:04.709 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:04.968 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:04.968 [33/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:06:04.968 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:05.226 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:05.226 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:05.485 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:05.485 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:05.485 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:05.485 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:05.485 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:05.485 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:05.485 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:05.485 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:05.782 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:06.040 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:06.298 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:06.298 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:06.555 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:06.555 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:06.555 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:06.812 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:06.812 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:06.812 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:06.812 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:07.071 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:07.071 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:07.329 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:07.329 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:07.587 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:07.587 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:07.587 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:07.587 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:07.587 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:07.587 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:07.846 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:07.846 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:07.846 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:08.106 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:08.364 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:08.364 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:08.364 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:08.364 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:08.622 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:08.622 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:08.622 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:08.622 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:08.880 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:08.880 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:08.880 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:08.880 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:09.138 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:09.707 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:09.707 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:09.707 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:09.707 [86/268] Linking static target lib/librte_ring.a 00:06:09.707 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:09.970 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:09.970 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:09.970 [90/268] Linking static target lib/librte_rcu.a 00:06:09.970 [91/268] Linking static target lib/librte_eal.a 00:06:09.970 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:09.970 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:10.245 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:10.245 
[95/268] Linking static target lib/librte_mempool.a 00:06:10.245 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.504 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:10.504 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:10.504 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:10.504 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:10.504 [101/268] Linking static target lib/librte_mbuf.a 00:06:10.762 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:10.762 [103/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:11.020 [104/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:11.020 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:11.278 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:11.278 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:11.536 [108/268] Linking static target lib/librte_meter.a 00:06:11.536 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:11.536 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:11.536 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:11.794 [112/268] Linking static target lib/librte_net.a 00:06:11.795 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.795 [114/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.795 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:12.054 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.054 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:12.312 [118/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:12.312 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.571 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:12.829 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:12.829 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:13.395 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:13.395 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:13.395 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:13.395 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:13.395 [127/268] Linking static target lib/librte_pci.a 00:06:13.722 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:13.722 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:13.722 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:13.993 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:13.993 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:13.993 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.993 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:13.993 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:14.252 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:14.252 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:14.252 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:14.252 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:14.252 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:14.510 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:14.510 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:14.510 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:14.510 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:15.076 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:15.076 [146/268] Linking static target lib/librte_cmdline.a 00:06:15.076 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:15.076 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:15.076 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:15.076 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:15.334 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:15.593 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:15.593 [153/268] Linking static target lib/librte_timer.a 00:06:15.850 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:15.850 [155/268] Linking static target lib/librte_hash.a 00:06:16.109 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:16.109 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:16.109 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:16.367 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:16.367 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:16.626 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:16.626 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 
00:06:16.626 [163/268] Linking static target lib/librte_ethdev.a 00:06:16.885 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:16.885 [165/268] Linking static target lib/librte_compressdev.a 00:06:16.885 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:17.143 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:17.143 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:17.143 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:17.143 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:17.401 [171/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:17.401 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:17.401 [173/268] Linking static target lib/librte_dmadev.a 00:06:17.401 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:17.659 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:17.917 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:17.917 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:17.917 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:18.176 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:18.176 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:18.176 [181/268] Linking static target lib/librte_cryptodev.a 00:06:18.433 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:18.433 [183/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:18.433 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 
00:06:19.091 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:19.091 [186/268] Linking static target lib/librte_power.a 00:06:19.091 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:19.349 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:19.349 [189/268] Linking static target lib/librte_security.a 00:06:19.349 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:19.607 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:19.607 [192/268] Linking static target lib/librte_reorder.a 00:06:19.864 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:20.123 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:20.123 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:20.390 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:20.648 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:20.648 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:20.906 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:21.164 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:21.164 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:21.164 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.422 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:21.423 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:21.423 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:21.680 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:22.247 [207/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:22.247 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:22.247 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:22.247 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:22.247 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:22.504 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:22.504 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:22.504 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:22.504 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:22.504 [216/268] Linking static target drivers/librte_bus_vdev.a 00:06:22.763 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:22.763 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:22.763 [219/268] Linking static target drivers/librte_bus_pci.a 00:06:22.763 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:22.763 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:23.023 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:23.023 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.281 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:23.281 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:23.281 [226/268] Linking static target drivers/librte_mempool_ring.a 00:06:23.540 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:06:24.108 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.367 [229/268] Linking target lib/librte_eal.so.24.1 00:06:24.633 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:24.633 [231/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:24.633 [232/268] Linking target lib/librte_timer.so.24.1 00:06:24.633 [233/268] Linking target lib/librte_ring.so.24.1 00:06:24.633 [234/268] Linking target lib/librte_dmadev.so.24.1 00:06:24.633 [235/268] Linking target lib/librte_pci.so.24.1 00:06:24.633 [236/268] Linking target lib/librte_meter.so.24.1 00:06:24.633 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:24.633 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:24.905 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:24.905 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:24.905 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:24.905 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:24.905 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:24.905 [244/268] Linking target lib/librte_rcu.so.24.1 00:06:24.905 [245/268] Linking target lib/librte_mempool.so.24.1 00:06:25.163 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:25.163 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:25.163 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:06:25.163 [249/268] Linking target lib/librte_mbuf.so.24.1 00:06:25.422 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:25.422 [251/268] Linking target lib/librte_reorder.so.24.1 00:06:25.422 [252/268] Linking target 
lib/librte_net.so.24.1 00:06:25.422 [253/268] Linking target lib/librte_compressdev.so.24.1 00:06:25.422 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:06:25.681 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:25.681 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:25.681 [257/268] Linking target lib/librte_cmdline.so.24.1 00:06:25.681 [258/268] Linking target lib/librte_hash.so.24.1 00:06:25.681 [259/268] Linking target lib/librte_security.so.24.1 00:06:25.681 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:25.940 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.199 [262/268] Linking target lib/librte_ethdev.so.24.1 00:06:26.459 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:26.459 [264/268] Linking target lib/librte_power.so.24.1 00:06:29.758 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:29.758 [266/268] Linking static target lib/librte_vhost.a 00:06:30.744 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:31.002 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:31.002 INFO: autodetecting backend as ninja 00:06:31.002 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:57.568 CC lib/ut_mock/mock.o 00:06:57.568 CC lib/ut/ut.o 00:06:57.568 CC lib/log/log.o 00:06:57.568 CC lib/log/log_flags.o 00:06:57.568 CC lib/log/log_deprecated.o 00:06:57.568 LIB libspdk_ut_mock.a 00:06:57.568 SO libspdk_ut_mock.so.6.0 00:06:57.568 LIB libspdk_ut.a 00:06:57.568 SYMLINK libspdk_ut_mock.so 00:06:57.568 LIB libspdk_log.a 00:06:57.568 SO libspdk_ut.so.2.0 00:06:57.568 SO libspdk_log.so.7.1 00:06:57.568 SYMLINK libspdk_ut.so 00:06:57.568 SYMLINK libspdk_log.so 
00:06:57.568 CXX lib/trace_parser/trace.o 00:06:57.568 CC lib/ioat/ioat.o 00:06:57.568 CC lib/dma/dma.o 00:06:57.568 CC lib/util/base64.o 00:06:57.568 CC lib/util/bit_array.o 00:06:57.568 CC lib/util/cpuset.o 00:06:57.568 CC lib/util/crc16.o 00:06:57.568 CC lib/util/crc32.o 00:06:57.568 CC lib/util/crc32c.o 00:06:57.568 CC lib/vfio_user/host/vfio_user_pci.o 00:06:57.568 CC lib/vfio_user/host/vfio_user.o 00:06:57.568 CC lib/util/crc32_ieee.o 00:06:57.568 CC lib/util/crc64.o 00:06:57.568 CC lib/util/dif.o 00:06:57.568 LIB libspdk_dma.a 00:06:57.568 CC lib/util/fd.o 00:06:57.568 SO libspdk_dma.so.5.0 00:06:57.568 CC lib/util/fd_group.o 00:06:57.568 CC lib/util/file.o 00:06:57.568 SYMLINK libspdk_dma.so 00:06:57.568 CC lib/util/hexlify.o 00:06:57.568 CC lib/util/iov.o 00:06:57.568 LIB libspdk_ioat.a 00:06:57.568 CC lib/util/math.o 00:06:57.568 SO libspdk_ioat.so.7.0 00:06:57.568 LIB libspdk_vfio_user.a 00:06:57.568 CC lib/util/net.o 00:06:57.568 SO libspdk_vfio_user.so.5.0 00:06:57.568 SYMLINK libspdk_ioat.so 00:06:57.568 CC lib/util/pipe.o 00:06:57.568 CC lib/util/strerror_tls.o 00:06:57.568 CC lib/util/string.o 00:06:57.568 CC lib/util/uuid.o 00:06:57.568 SYMLINK libspdk_vfio_user.so 00:06:57.568 CC lib/util/xor.o 00:06:57.568 CC lib/util/zipf.o 00:06:57.568 CC lib/util/md5.o 00:06:57.568 LIB libspdk_util.a 00:06:57.568 SO libspdk_util.so.10.0 00:06:57.568 SYMLINK libspdk_util.so 00:06:57.568 LIB libspdk_trace_parser.a 00:06:57.568 CC lib/vmd/vmd.o 00:06:57.568 CC lib/vmd/led.o 00:06:57.568 CC lib/rdma_utils/rdma_utils.o 00:06:57.568 CC lib/json/json_util.o 00:06:57.568 CC lib/rdma_provider/common.o 00:06:57.568 CC lib/json/json_parse.o 00:06:57.568 CC lib/env_dpdk/env.o 00:06:57.568 CC lib/idxd/idxd.o 00:06:57.568 CC lib/conf/conf.o 00:06:57.568 SO libspdk_trace_parser.so.6.0 00:06:57.826 SYMLINK libspdk_trace_parser.so 00:06:57.827 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:57.827 CC lib/json/json_write.o 00:06:57.827 CC lib/env_dpdk/memory.o 00:06:57.827 CC 
lib/idxd/idxd_user.o 00:06:57.827 LIB libspdk_rdma_utils.a 00:06:58.083 SO libspdk_rdma_utils.so.1.0 00:06:58.083 CC lib/env_dpdk/pci.o 00:06:58.083 LIB libspdk_conf.a 00:06:58.083 SYMLINK libspdk_rdma_utils.so 00:06:58.083 CC lib/env_dpdk/init.o 00:06:58.083 SO libspdk_conf.so.6.0 00:06:58.083 LIB libspdk_rdma_provider.a 00:06:58.083 SO libspdk_rdma_provider.so.6.0 00:06:58.083 SYMLINK libspdk_conf.so 00:06:58.083 CC lib/env_dpdk/threads.o 00:06:58.083 SYMLINK libspdk_rdma_provider.so 00:06:58.083 CC lib/env_dpdk/pci_ioat.o 00:06:58.339 CC lib/idxd/idxd_kernel.o 00:06:58.339 CC lib/env_dpdk/pci_virtio.o 00:06:58.339 LIB libspdk_json.a 00:06:58.339 CC lib/env_dpdk/pci_vmd.o 00:06:58.339 CC lib/env_dpdk/pci_idxd.o 00:06:58.596 CC lib/env_dpdk/pci_event.o 00:06:58.596 SO libspdk_json.so.6.0 00:06:58.596 CC lib/env_dpdk/sigbus_handler.o 00:06:58.596 LIB libspdk_vmd.a 00:06:58.596 CC lib/env_dpdk/pci_dpdk.o 00:06:58.596 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:58.596 SYMLINK libspdk_json.so 00:06:58.596 SO libspdk_vmd.so.6.0 00:06:58.596 SYMLINK libspdk_vmd.so 00:06:58.854 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:58.854 CC lib/jsonrpc/jsonrpc_client.o 00:06:58.854 CC lib/jsonrpc/jsonrpc_server.o 00:06:58.854 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:58.854 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:58.854 LIB libspdk_idxd.a 00:06:59.112 SO libspdk_idxd.so.12.1 00:06:59.112 SYMLINK libspdk_idxd.so 00:06:59.112 LIB libspdk_jsonrpc.a 00:06:59.369 SO libspdk_jsonrpc.so.6.0 00:06:59.369 SYMLINK libspdk_jsonrpc.so 00:06:59.628 CC lib/rpc/rpc.o 00:06:59.886 LIB libspdk_rpc.a 00:06:59.886 SO libspdk_rpc.so.6.0 00:06:59.886 SYMLINK libspdk_rpc.so 00:07:00.144 CC lib/trace/trace.o 00:07:00.144 CC lib/trace/trace_flags.o 00:07:00.144 CC lib/trace/trace_rpc.o 00:07:00.144 CC lib/keyring/keyring.o 00:07:00.144 CC lib/keyring/keyring_rpc.o 00:07:00.144 CC lib/notify/notify.o 00:07:00.144 CC lib/notify/notify_rpc.o 00:07:00.402 LIB libspdk_env_dpdk.a 00:07:00.402 SO libspdk_env_dpdk.so.15.1 
00:07:00.402 LIB libspdk_notify.a 00:07:00.660 SO libspdk_notify.so.6.0 00:07:00.660 LIB libspdk_keyring.a 00:07:00.660 SO libspdk_keyring.so.2.0 00:07:00.660 SYMLINK libspdk_notify.so 00:07:00.660 LIB libspdk_trace.a 00:07:00.660 SYMLINK libspdk_env_dpdk.so 00:07:00.660 SO libspdk_trace.so.11.0 00:07:00.660 SYMLINK libspdk_keyring.so 00:07:00.917 SYMLINK libspdk_trace.so 00:07:01.175 CC lib/thread/thread.o 00:07:01.175 CC lib/sock/sock.o 00:07:01.175 CC lib/thread/iobuf.o 00:07:01.175 CC lib/sock/sock_rpc.o 00:07:01.743 LIB libspdk_sock.a 00:07:01.743 SO libspdk_sock.so.10.0 00:07:01.743 SYMLINK libspdk_sock.so 00:07:02.000 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:02.000 CC lib/nvme/nvme_ctrlr.o 00:07:02.000 CC lib/nvme/nvme_fabric.o 00:07:02.000 CC lib/nvme/nvme_ns_cmd.o 00:07:02.000 CC lib/nvme/nvme_ns.o 00:07:02.000 CC lib/nvme/nvme_pcie.o 00:07:02.000 CC lib/nvme/nvme.o 00:07:02.000 CC lib/nvme/nvme_pcie_common.o 00:07:02.000 CC lib/nvme/nvme_qpair.o 00:07:03.380 CC lib/nvme/nvme_quirks.o 00:07:03.380 CC lib/nvme/nvme_transport.o 00:07:03.380 LIB libspdk_thread.a 00:07:03.380 CC lib/nvme/nvme_discovery.o 00:07:03.380 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:03.380 SO libspdk_thread.so.11.0 00:07:03.639 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:03.639 SYMLINK libspdk_thread.so 00:07:03.639 CC lib/nvme/nvme_tcp.o 00:07:03.639 CC lib/nvme/nvme_opal.o 00:07:03.898 CC lib/nvme/nvme_io_msg.o 00:07:04.156 CC lib/nvme/nvme_poll_group.o 00:07:04.156 CC lib/accel/accel.o 00:07:04.415 CC lib/accel/accel_rpc.o 00:07:04.415 CC lib/blob/blobstore.o 00:07:04.415 CC lib/accel/accel_sw.o 00:07:04.982 CC lib/blob/request.o 00:07:04.982 CC lib/init/json_config.o 00:07:04.982 CC lib/blob/zeroes.o 00:07:05.240 CC lib/virtio/virtio.o 00:07:05.240 CC lib/virtio/virtio_vhost_user.o 00:07:05.240 CC lib/virtio/virtio_vfio_user.o 00:07:05.499 CC lib/virtio/virtio_pci.o 00:07:05.499 CC lib/init/subsystem.o 00:07:05.757 CC lib/blob/blob_bs_dev.o 00:07:05.757 CC lib/nvme/nvme_zns.o 00:07:05.757 CC 
lib/init/subsystem_rpc.o 00:07:06.014 CC lib/init/rpc.o 00:07:06.014 LIB libspdk_virtio.a 00:07:06.014 CC lib/fsdev/fsdev.o 00:07:06.014 SO libspdk_virtio.so.7.0 00:07:06.014 CC lib/fsdev/fsdev_io.o 00:07:06.014 CC lib/fsdev/fsdev_rpc.o 00:07:06.272 CC lib/nvme/nvme_stubs.o 00:07:06.273 SYMLINK libspdk_virtio.so 00:07:06.273 LIB libspdk_init.a 00:07:06.273 CC lib/nvme/nvme_auth.o 00:07:06.273 SO libspdk_init.so.6.0 00:07:06.531 CC lib/nvme/nvme_cuse.o 00:07:06.531 SYMLINK libspdk_init.so 00:07:06.531 CC lib/nvme/nvme_rdma.o 00:07:06.789 CC lib/event/app.o 00:07:06.789 LIB libspdk_accel.a 00:07:07.048 CC lib/event/reactor.o 00:07:07.048 CC lib/event/log_rpc.o 00:07:07.048 SO libspdk_accel.so.16.0 00:07:07.048 LIB libspdk_fsdev.a 00:07:07.048 SYMLINK libspdk_accel.so 00:07:07.048 CC lib/event/app_rpc.o 00:07:07.048 SO libspdk_fsdev.so.2.0 00:07:07.306 CC lib/event/scheduler_static.o 00:07:07.306 SYMLINK libspdk_fsdev.so 00:07:07.564 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:07.564 CC lib/bdev/bdev.o 00:07:07.564 CC lib/bdev/bdev_rpc.o 00:07:07.564 CC lib/bdev/bdev_zone.o 00:07:07.822 CC lib/bdev/part.o 00:07:07.822 LIB libspdk_event.a 00:07:07.822 SO libspdk_event.so.14.0 00:07:07.822 SYMLINK libspdk_event.so 00:07:07.822 CC lib/bdev/scsi_nvme.o 00:07:08.755 LIB libspdk_fuse_dispatcher.a 00:07:08.755 SO libspdk_fuse_dispatcher.so.1.0 00:07:09.014 SYMLINK libspdk_fuse_dispatcher.so 00:07:09.014 LIB libspdk_nvme.a 00:07:09.598 SO libspdk_nvme.so.14.1 00:07:09.858 SYMLINK libspdk_nvme.so 00:07:10.795 LIB libspdk_blob.a 00:07:10.795 SO libspdk_blob.so.11.0 00:07:10.795 SYMLINK libspdk_blob.so 00:07:11.053 CC lib/blobfs/blobfs.o 00:07:11.053 CC lib/blobfs/tree.o 00:07:11.053 CC lib/lvol/lvol.o 00:07:11.311 LIB libspdk_bdev.a 00:07:11.570 SO libspdk_bdev.so.17.0 00:07:11.570 SYMLINK libspdk_bdev.so 00:07:11.829 CC lib/nbd/nbd.o 00:07:11.829 CC lib/nvmf/ctrlr.o 00:07:11.829 CC lib/scsi/dev.o 00:07:11.829 CC lib/scsi/lun.o 00:07:11.829 CC lib/scsi/port.o 00:07:11.829 CC 
lib/scsi/scsi.o 00:07:11.829 CC lib/ftl/ftl_core.o 00:07:11.829 CC lib/ublk/ublk.o 00:07:12.087 CC lib/ublk/ublk_rpc.o 00:07:12.087 CC lib/nvmf/ctrlr_discovery.o 00:07:12.345 CC lib/nvmf/ctrlr_bdev.o 00:07:12.345 CC lib/scsi/scsi_bdev.o 00:07:12.345 CC lib/nbd/nbd_rpc.o 00:07:12.604 LIB libspdk_blobfs.a 00:07:12.604 CC lib/ftl/ftl_init.o 00:07:12.604 SO libspdk_blobfs.so.10.0 00:07:12.604 SYMLINK libspdk_blobfs.so 00:07:12.604 LIB libspdk_nbd.a 00:07:12.604 CC lib/ftl/ftl_layout.o 00:07:12.604 SO libspdk_nbd.so.7.0 00:07:12.862 CC lib/ftl/ftl_debug.o 00:07:12.862 SYMLINK libspdk_nbd.so 00:07:12.862 CC lib/nvmf/subsystem.o 00:07:12.862 LIB libspdk_lvol.a 00:07:12.862 SO libspdk_lvol.so.10.0 00:07:12.862 CC lib/nvmf/nvmf.o 00:07:13.122 SYMLINK libspdk_lvol.so 00:07:13.122 CC lib/nvmf/nvmf_rpc.o 00:07:13.122 CC lib/scsi/scsi_pr.o 00:07:13.122 CC lib/ftl/ftl_io.o 00:07:13.122 CC lib/ftl/ftl_sb.o 00:07:13.379 CC lib/ftl/ftl_l2p.o 00:07:13.379 CC lib/nvmf/transport.o 00:07:13.379 LIB libspdk_ublk.a 00:07:13.379 CC lib/ftl/ftl_l2p_flat.o 00:07:13.379 SO libspdk_ublk.so.3.0 00:07:13.379 CC lib/scsi/scsi_rpc.o 00:07:13.638 SYMLINK libspdk_ublk.so 00:07:13.638 CC lib/scsi/task.o 00:07:13.638 CC lib/nvmf/tcp.o 00:07:13.638 CC lib/nvmf/stubs.o 00:07:13.638 CC lib/ftl/ftl_nv_cache.o 00:07:13.896 CC lib/nvmf/mdns_server.o 00:07:13.896 LIB libspdk_scsi.a 00:07:14.155 SO libspdk_scsi.so.9.0 00:07:14.155 SYMLINK libspdk_scsi.so 00:07:14.155 CC lib/nvmf/rdma.o 00:07:14.412 CC lib/vhost/vhost.o 00:07:14.412 CC lib/iscsi/conn.o 00:07:14.979 CC lib/iscsi/init_grp.o 00:07:14.979 CC lib/vhost/vhost_rpc.o 00:07:15.239 CC lib/nvmf/auth.o 00:07:15.239 CC lib/vhost/vhost_scsi.o 00:07:15.498 CC lib/iscsi/iscsi.o 00:07:15.498 CC lib/ftl/ftl_band.o 00:07:15.755 CC lib/vhost/vhost_blk.o 00:07:15.755 CC lib/iscsi/param.o 00:07:15.755 CC lib/iscsi/portal_grp.o 00:07:16.013 CC lib/iscsi/tgt_node.o 00:07:16.270 CC lib/iscsi/iscsi_subsystem.o 00:07:16.270 CC lib/iscsi/iscsi_rpc.o 00:07:16.270 CC 
lib/ftl/ftl_band_ops.o 00:07:16.835 CC lib/ftl/ftl_writer.o 00:07:16.835 CC lib/iscsi/task.o 00:07:16.835 CC lib/vhost/rte_vhost_user.o 00:07:17.093 CC lib/ftl/ftl_rq.o 00:07:17.093 CC lib/ftl/ftl_reloc.o 00:07:17.093 CC lib/ftl/ftl_l2p_cache.o 00:07:17.093 CC lib/ftl/ftl_p2l.o 00:07:17.093 CC lib/ftl/ftl_p2l_log.o 00:07:17.350 CC lib/ftl/mngt/ftl_mngt.o 00:07:17.608 LIB libspdk_iscsi.a 00:07:17.608 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:17.608 SO libspdk_iscsi.so.8.0 00:07:17.608 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:17.608 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:17.865 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:17.865 SYMLINK libspdk_iscsi.so 00:07:17.865 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:17.865 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:18.124 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:18.124 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:18.124 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:18.124 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:18.124 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:18.124 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:18.381 CC lib/ftl/utils/ftl_conf.o 00:07:18.381 LIB libspdk_vhost.a 00:07:18.381 CC lib/ftl/utils/ftl_md.o 00:07:18.381 CC lib/ftl/utils/ftl_mempool.o 00:07:18.381 SO libspdk_vhost.so.8.0 00:07:18.381 CC lib/ftl/utils/ftl_bitmap.o 00:07:18.638 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:18.638 CC lib/ftl/utils/ftl_property.o 00:07:18.639 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:18.639 SYMLINK libspdk_vhost.so 00:07:18.639 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:18.639 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:18.639 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:18.639 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:18.897 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:18.897 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:18.897 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:18.897 LIB libspdk_nvmf.a 00:07:18.897 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:18.897 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:19.155 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:19.155 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:19.155 CC lib/ftl/base/ftl_base_dev.o 00:07:19.155 CC lib/ftl/base/ftl_base_bdev.o 00:07:19.155 CC lib/ftl/ftl_trace.o 00:07:19.155 SO libspdk_nvmf.so.20.0 00:07:19.721 SYMLINK libspdk_nvmf.so 00:07:19.721 LIB libspdk_ftl.a 00:07:19.984 SO libspdk_ftl.so.9.0 00:07:20.245 SYMLINK libspdk_ftl.so 00:07:20.812 CC module/env_dpdk/env_dpdk_rpc.o 00:07:20.812 CC module/accel/dsa/accel_dsa.o 00:07:20.812 CC module/accel/ioat/accel_ioat.o 00:07:20.812 CC module/blob/bdev/blob_bdev.o 00:07:20.812 CC module/keyring/file/keyring.o 00:07:20.812 CC module/sock/posix/posix.o 00:07:20.812 CC module/accel/iaa/accel_iaa.o 00:07:20.812 CC module/fsdev/aio/fsdev_aio.o 00:07:20.812 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:20.812 CC module/accel/error/accel_error.o 00:07:20.812 LIB libspdk_env_dpdk_rpc.a 00:07:21.070 SO libspdk_env_dpdk_rpc.so.6.0 00:07:21.070 CC module/accel/ioat/accel_ioat_rpc.o 00:07:21.070 CC module/keyring/file/keyring_rpc.o 00:07:21.070 LIB libspdk_scheduler_dynamic.a 00:07:21.070 CC module/accel/error/accel_error_rpc.o 00:07:21.070 SYMLINK libspdk_env_dpdk_rpc.so 00:07:21.070 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:21.070 SO libspdk_scheduler_dynamic.so.4.0 00:07:21.330 CC module/accel/iaa/accel_iaa_rpc.o 00:07:21.330 CC module/accel/dsa/accel_dsa_rpc.o 00:07:21.330 LIB libspdk_accel_ioat.a 00:07:21.330 SYMLINK libspdk_scheduler_dynamic.so 00:07:21.330 LIB libspdk_blob_bdev.a 00:07:21.330 LIB libspdk_accel_error.a 00:07:21.330 SO libspdk_accel_ioat.so.6.0 00:07:21.330 LIB libspdk_keyring_file.a 00:07:21.330 SO libspdk_blob_bdev.so.11.0 00:07:21.330 SO libspdk_accel_error.so.2.0 00:07:21.330 SO libspdk_keyring_file.so.2.0 00:07:21.589 SYMLINK libspdk_accel_ioat.so 00:07:21.589 CC module/fsdev/aio/linux_aio_mgr.o 00:07:21.589 LIB libspdk_accel_dsa.a 00:07:21.589 SYMLINK libspdk_accel_error.so 00:07:21.589 SYMLINK libspdk_keyring_file.so 00:07:21.589 SYMLINK libspdk_blob_bdev.so 00:07:21.589 LIB 
libspdk_accel_iaa.a 00:07:21.589 SO libspdk_accel_dsa.so.5.0 00:07:21.589 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:21.589 SO libspdk_accel_iaa.so.3.0 00:07:21.589 SYMLINK libspdk_accel_dsa.so 00:07:21.589 SYMLINK libspdk_accel_iaa.so 00:07:21.847 CC module/keyring/linux/keyring.o 00:07:21.847 CC module/scheduler/gscheduler/gscheduler.o 00:07:21.847 CC module/bdev/delay/vbdev_delay.o 00:07:21.847 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:21.847 CC module/bdev/gpt/gpt.o 00:07:21.847 LIB libspdk_scheduler_dpdk_governor.a 00:07:21.847 CC module/blobfs/bdev/blobfs_bdev.o 00:07:21.847 CC module/bdev/error/vbdev_error.o 00:07:21.847 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:22.106 LIB libspdk_fsdev_aio.a 00:07:22.106 CC module/keyring/linux/keyring_rpc.o 00:07:22.106 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:22.106 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:22.106 LIB libspdk_scheduler_gscheduler.a 00:07:22.106 SO libspdk_scheduler_gscheduler.so.4.0 00:07:22.106 SO libspdk_fsdev_aio.so.1.0 00:07:22.106 CC module/bdev/gpt/vbdev_gpt.o 00:07:22.106 SYMLINK libspdk_fsdev_aio.so 00:07:22.106 SYMLINK libspdk_scheduler_gscheduler.so 00:07:22.364 LIB libspdk_keyring_linux.a 00:07:22.364 CC module/bdev/error/vbdev_error_rpc.o 00:07:22.364 SO libspdk_keyring_linux.so.1.0 00:07:22.364 SYMLINK libspdk_keyring_linux.so 00:07:22.364 LIB libspdk_blobfs_bdev.a 00:07:22.364 LIB libspdk_sock_posix.a 00:07:22.622 CC module/bdev/lvol/vbdev_lvol.o 00:07:22.622 SO libspdk_blobfs_bdev.so.6.0 00:07:22.622 SO libspdk_sock_posix.so.6.0 00:07:22.622 LIB libspdk_bdev_error.a 00:07:22.622 CC module/bdev/malloc/bdev_malloc.o 00:07:22.622 SO libspdk_bdev_error.so.6.0 00:07:22.622 CC module/bdev/null/bdev_null.o 00:07:22.622 SYMLINK libspdk_blobfs_bdev.so 00:07:22.622 SYMLINK libspdk_sock_posix.so 00:07:22.622 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:22.622 LIB libspdk_bdev_delay.a 00:07:22.622 SYMLINK libspdk_bdev_error.so 00:07:22.622 CC module/bdev/nvme/bdev_nvme.o 
00:07:22.622 LIB libspdk_bdev_gpt.a 00:07:22.622 SO libspdk_bdev_delay.so.6.0 00:07:22.880 CC module/bdev/passthru/vbdev_passthru.o 00:07:22.880 SO libspdk_bdev_gpt.so.6.0 00:07:22.880 SYMLINK libspdk_bdev_delay.so 00:07:22.880 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:22.880 CC module/bdev/raid/bdev_raid.o 00:07:22.880 SYMLINK libspdk_bdev_gpt.so 00:07:22.880 CC module/bdev/raid/bdev_raid_rpc.o 00:07:22.880 CC module/bdev/split/vbdev_split.o 00:07:22.880 CC module/bdev/null/bdev_null_rpc.o 00:07:23.139 CC module/bdev/raid/bdev_raid_sb.o 00:07:23.139 CC module/bdev/split/vbdev_split_rpc.o 00:07:23.139 CC module/bdev/raid/raid0.o 00:07:23.399 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:23.399 LIB libspdk_bdev_null.a 00:07:23.399 LIB libspdk_bdev_lvol.a 00:07:23.399 SO libspdk_bdev_lvol.so.6.0 00:07:23.399 SO libspdk_bdev_null.so.6.0 00:07:23.399 LIB libspdk_bdev_split.a 00:07:23.399 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:23.399 SO libspdk_bdev_split.so.6.0 00:07:23.399 SYMLINK libspdk_bdev_lvol.so 00:07:23.399 LIB libspdk_bdev_malloc.a 00:07:23.657 SYMLINK libspdk_bdev_null.so 00:07:23.658 CC module/bdev/nvme/nvme_rpc.o 00:07:23.658 SYMLINK libspdk_bdev_split.so 00:07:23.658 CC module/bdev/raid/raid1.o 00:07:23.658 SO libspdk_bdev_malloc.so.6.0 00:07:23.658 CC module/bdev/raid/concat.o 00:07:23.658 CC module/bdev/raid/raid5f.o 00:07:23.658 LIB libspdk_bdev_passthru.a 00:07:23.658 SYMLINK libspdk_bdev_malloc.so 00:07:23.658 SO libspdk_bdev_passthru.so.6.0 00:07:23.658 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:23.917 SYMLINK libspdk_bdev_passthru.so 00:07:23.917 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:23.917 CC module/bdev/aio/bdev_aio.o 00:07:23.917 CC module/bdev/nvme/bdev_mdns_client.o 00:07:24.176 CC module/bdev/nvme/vbdev_opal.o 00:07:24.176 CC module/bdev/ftl/bdev_ftl.o 00:07:24.176 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:24.176 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:24.176 LIB libspdk_bdev_zone_block.a 00:07:24.435 SO 
libspdk_bdev_zone_block.so.6.0 00:07:24.435 CC module/bdev/iscsi/bdev_iscsi.o 00:07:24.435 SYMLINK libspdk_bdev_zone_block.so 00:07:24.435 LIB libspdk_bdev_raid.a 00:07:24.435 CC module/bdev/aio/bdev_aio_rpc.o 00:07:24.435 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:24.435 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:24.694 SO libspdk_bdev_raid.so.6.0 00:07:24.694 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:24.694 LIB libspdk_bdev_ftl.a 00:07:24.694 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:24.694 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:24.694 SO libspdk_bdev_ftl.so.6.0 00:07:24.694 LIB libspdk_bdev_aio.a 00:07:24.694 SYMLINK libspdk_bdev_raid.so 00:07:24.694 SYMLINK libspdk_bdev_ftl.so 00:07:24.694 SO libspdk_bdev_aio.so.6.0 00:07:24.953 SYMLINK libspdk_bdev_aio.so 00:07:25.211 LIB libspdk_bdev_iscsi.a 00:07:25.211 SO libspdk_bdev_iscsi.so.6.0 00:07:25.211 SYMLINK libspdk_bdev_iscsi.so 00:07:25.469 LIB libspdk_bdev_virtio.a 00:07:25.469 SO libspdk_bdev_virtio.so.6.0 00:07:25.727 SYMLINK libspdk_bdev_virtio.so 00:07:26.663 LIB libspdk_bdev_nvme.a 00:07:26.921 SO libspdk_bdev_nvme.so.7.1 00:07:26.921 SYMLINK libspdk_bdev_nvme.so 00:07:27.489 CC module/event/subsystems/vmd/vmd.o 00:07:27.489 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:27.489 CC module/event/subsystems/sock/sock.o 00:07:27.489 CC module/event/subsystems/scheduler/scheduler.o 00:07:27.489 CC module/event/subsystems/iobuf/iobuf.o 00:07:27.489 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:27.489 CC module/event/subsystems/fsdev/fsdev.o 00:07:27.489 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:27.489 CC module/event/subsystems/keyring/keyring.o 00:07:27.489 LIB libspdk_event_keyring.a 00:07:27.489 LIB libspdk_event_sock.a 00:07:27.489 LIB libspdk_event_scheduler.a 00:07:27.489 SO libspdk_event_keyring.so.1.0 00:07:27.489 LIB libspdk_event_vhost_blk.a 00:07:27.489 LIB libspdk_event_vmd.a 00:07:27.489 SO libspdk_event_scheduler.so.4.0 00:07:27.748 LIB libspdk_event_fsdev.a 
00:07:27.748 LIB libspdk_event_iobuf.a 00:07:27.748 SO libspdk_event_sock.so.5.0 00:07:27.748 SO libspdk_event_vhost_blk.so.3.0 00:07:27.748 SO libspdk_event_vmd.so.6.0 00:07:27.748 SO libspdk_event_fsdev.so.1.0 00:07:27.748 SO libspdk_event_iobuf.so.3.0 00:07:27.748 SYMLINK libspdk_event_keyring.so 00:07:27.748 SYMLINK libspdk_event_scheduler.so 00:07:27.748 SYMLINK libspdk_event_sock.so 00:07:27.748 SYMLINK libspdk_event_vhost_blk.so 00:07:27.748 SYMLINK libspdk_event_vmd.so 00:07:27.748 SYMLINK libspdk_event_iobuf.so 00:07:27.748 SYMLINK libspdk_event_fsdev.so 00:07:28.006 CC module/event/subsystems/accel/accel.o 00:07:28.265 LIB libspdk_event_accel.a 00:07:28.265 SO libspdk_event_accel.so.6.0 00:07:28.265 SYMLINK libspdk_event_accel.so 00:07:28.522 CC module/event/subsystems/bdev/bdev.o 00:07:28.781 LIB libspdk_event_bdev.a 00:07:28.781 SO libspdk_event_bdev.so.6.0 00:07:29.039 SYMLINK libspdk_event_bdev.so 00:07:29.039 CC module/event/subsystems/scsi/scsi.o 00:07:29.039 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:29.039 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:29.039 CC module/event/subsystems/nbd/nbd.o 00:07:29.039 CC module/event/subsystems/ublk/ublk.o 00:07:29.298 LIB libspdk_event_nbd.a 00:07:29.298 LIB libspdk_event_ublk.a 00:07:29.298 LIB libspdk_event_scsi.a 00:07:29.556 SO libspdk_event_ublk.so.3.0 00:07:29.556 SO libspdk_event_nbd.so.6.0 00:07:29.556 SO libspdk_event_scsi.so.6.0 00:07:29.556 SYMLINK libspdk_event_ublk.so 00:07:29.556 SYMLINK libspdk_event_nbd.so 00:07:29.556 SYMLINK libspdk_event_scsi.so 00:07:29.556 LIB libspdk_event_nvmf.a 00:07:29.556 SO libspdk_event_nvmf.so.6.0 00:07:29.814 SYMLINK libspdk_event_nvmf.so 00:07:29.814 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:29.814 CC module/event/subsystems/iscsi/iscsi.o 00:07:29.814 LIB libspdk_event_vhost_scsi.a 00:07:30.072 SO libspdk_event_vhost_scsi.so.3.0 00:07:30.072 LIB libspdk_event_iscsi.a 00:07:30.072 SYMLINK libspdk_event_vhost_scsi.so 00:07:30.072 SO 
libspdk_event_iscsi.so.6.0 00:07:30.072 SYMLINK libspdk_event_iscsi.so 00:07:30.330 SO libspdk.so.6.0 00:07:30.330 SYMLINK libspdk.so 00:07:30.589 CXX app/trace/trace.o 00:07:30.589 CC app/spdk_nvme_perf/perf.o 00:07:30.589 CC app/trace_record/trace_record.o 00:07:30.589 CC app/spdk_nvme_identify/identify.o 00:07:30.589 CC app/spdk_lspci/spdk_lspci.o 00:07:30.589 CC app/iscsi_tgt/iscsi_tgt.o 00:07:30.589 CC app/nvmf_tgt/nvmf_main.o 00:07:30.848 CC app/spdk_tgt/spdk_tgt.o 00:07:30.848 CC examples/util/zipf/zipf.o 00:07:30.848 CC test/thread/poller_perf/poller_perf.o 00:07:30.848 LINK spdk_lspci 00:07:31.107 LINK zipf 00:07:31.107 LINK nvmf_tgt 00:07:31.107 LINK iscsi_tgt 00:07:31.107 LINK spdk_tgt 00:07:31.107 LINK poller_perf 00:07:31.365 LINK spdk_trace_record 00:07:31.365 LINK spdk_trace 00:07:31.623 CC test/dma/test_dma/test_dma.o 00:07:31.623 CC examples/ioat/perf/perf.o 00:07:31.623 CC examples/ioat/verify/verify.o 00:07:31.623 CC app/spdk_nvme_discover/discovery_aer.o 00:07:31.880 CC examples/vmd/lsvmd/lsvmd.o 00:07:31.880 CC examples/idxd/perf/perf.o 00:07:31.880 CC test/app/bdev_svc/bdev_svc.o 00:07:32.139 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:32.139 LINK lsvmd 00:07:32.139 LINK verify 00:07:32.139 LINK spdk_nvme_discover 00:07:32.139 LINK ioat_perf 00:07:32.397 LINK bdev_svc 00:07:32.397 LINK spdk_nvme_perf 00:07:32.397 LINK interrupt_tgt 00:07:32.655 LINK spdk_nvme_identify 00:07:32.655 CC app/spdk_top/spdk_top.o 00:07:32.655 LINK idxd_perf 00:07:32.655 CC examples/vmd/led/led.o 00:07:32.655 CC test/app/histogram_perf/histogram_perf.o 00:07:32.655 LINK test_dma 00:07:32.913 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:32.913 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:32.913 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:32.913 LINK led 00:07:32.913 LINK histogram_perf 00:07:32.913 TEST_HEADER include/spdk/accel.h 00:07:32.913 TEST_HEADER include/spdk/accel_module.h 00:07:32.913 TEST_HEADER include/spdk/assert.h 00:07:32.913 TEST_HEADER 
include/spdk/barrier.h 00:07:32.913 TEST_HEADER include/spdk/base64.h 00:07:32.913 TEST_HEADER include/spdk/bdev.h 00:07:32.913 TEST_HEADER include/spdk/bdev_module.h 00:07:32.913 TEST_HEADER include/spdk/bdev_zone.h 00:07:32.913 TEST_HEADER include/spdk/bit_array.h 00:07:32.913 TEST_HEADER include/spdk/bit_pool.h 00:07:33.198 TEST_HEADER include/spdk/blob_bdev.h 00:07:33.198 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:33.198 TEST_HEADER include/spdk/blobfs.h 00:07:33.198 TEST_HEADER include/spdk/blob.h 00:07:33.198 TEST_HEADER include/spdk/conf.h 00:07:33.198 TEST_HEADER include/spdk/config.h 00:07:33.198 TEST_HEADER include/spdk/cpuset.h 00:07:33.198 TEST_HEADER include/spdk/crc16.h 00:07:33.198 TEST_HEADER include/spdk/crc32.h 00:07:33.198 TEST_HEADER include/spdk/crc64.h 00:07:33.198 TEST_HEADER include/spdk/dif.h 00:07:33.198 TEST_HEADER include/spdk/dma.h 00:07:33.198 TEST_HEADER include/spdk/endian.h 00:07:33.198 TEST_HEADER include/spdk/env_dpdk.h 00:07:33.198 TEST_HEADER include/spdk/env.h 00:07:33.198 TEST_HEADER include/spdk/event.h 00:07:33.198 TEST_HEADER include/spdk/fd_group.h 00:07:33.198 TEST_HEADER include/spdk/fd.h 00:07:33.198 TEST_HEADER include/spdk/file.h 00:07:33.198 TEST_HEADER include/spdk/fsdev.h 00:07:33.198 TEST_HEADER include/spdk/fsdev_module.h 00:07:33.198 TEST_HEADER include/spdk/ftl.h 00:07:33.198 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:33.198 TEST_HEADER include/spdk/gpt_spec.h 00:07:33.198 TEST_HEADER include/spdk/hexlify.h 00:07:33.198 TEST_HEADER include/spdk/histogram_data.h 00:07:33.198 TEST_HEADER include/spdk/idxd.h 00:07:33.198 TEST_HEADER include/spdk/idxd_spec.h 00:07:33.198 CC examples/thread/thread/thread_ex.o 00:07:33.198 TEST_HEADER include/spdk/init.h 00:07:33.198 TEST_HEADER include/spdk/ioat.h 00:07:33.198 TEST_HEADER include/spdk/ioat_spec.h 00:07:33.198 TEST_HEADER include/spdk/iscsi_spec.h 00:07:33.198 TEST_HEADER include/spdk/json.h 00:07:33.198 TEST_HEADER include/spdk/jsonrpc.h 00:07:33.198 
TEST_HEADER include/spdk/keyring.h 00:07:33.198 TEST_HEADER include/spdk/keyring_module.h 00:07:33.198 TEST_HEADER include/spdk/likely.h 00:07:33.198 TEST_HEADER include/spdk/log.h 00:07:33.198 TEST_HEADER include/spdk/lvol.h 00:07:33.198 TEST_HEADER include/spdk/md5.h 00:07:33.198 TEST_HEADER include/spdk/memory.h 00:07:33.198 TEST_HEADER include/spdk/mmio.h 00:07:33.198 TEST_HEADER include/spdk/nbd.h 00:07:33.198 TEST_HEADER include/spdk/net.h 00:07:33.198 TEST_HEADER include/spdk/notify.h 00:07:33.198 TEST_HEADER include/spdk/nvme.h 00:07:33.198 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:33.198 TEST_HEADER include/spdk/nvme_intel.h 00:07:33.198 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:33.198 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:33.198 TEST_HEADER include/spdk/nvme_spec.h 00:07:33.199 TEST_HEADER include/spdk/nvme_zns.h 00:07:33.199 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:33.199 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:33.199 TEST_HEADER include/spdk/nvmf.h 00:07:33.199 TEST_HEADER include/spdk/nvmf_spec.h 00:07:33.199 TEST_HEADER include/spdk/nvmf_transport.h 00:07:33.199 TEST_HEADER include/spdk/opal.h 00:07:33.199 TEST_HEADER include/spdk/opal_spec.h 00:07:33.199 TEST_HEADER include/spdk/pci_ids.h 00:07:33.199 TEST_HEADER include/spdk/pipe.h 00:07:33.199 TEST_HEADER include/spdk/queue.h 00:07:33.199 TEST_HEADER include/spdk/reduce.h 00:07:33.199 TEST_HEADER include/spdk/rpc.h 00:07:33.199 TEST_HEADER include/spdk/scheduler.h 00:07:33.199 TEST_HEADER include/spdk/scsi.h 00:07:33.199 TEST_HEADER include/spdk/scsi_spec.h 00:07:33.199 TEST_HEADER include/spdk/sock.h 00:07:33.199 TEST_HEADER include/spdk/stdinc.h 00:07:33.199 TEST_HEADER include/spdk/string.h 00:07:33.199 CC test/app/jsoncat/jsoncat.o 00:07:33.199 TEST_HEADER include/spdk/thread.h 00:07:33.199 TEST_HEADER include/spdk/trace.h 00:07:33.199 TEST_HEADER include/spdk/trace_parser.h 00:07:33.199 TEST_HEADER include/spdk/tree.h 00:07:33.199 TEST_HEADER include/spdk/ublk.h 
00:07:33.199 CC test/env/mem_callbacks/mem_callbacks.o 00:07:33.199 TEST_HEADER include/spdk/util.h 00:07:33.199 TEST_HEADER include/spdk/uuid.h 00:07:33.199 TEST_HEADER include/spdk/version.h 00:07:33.199 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:33.199 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:33.199 TEST_HEADER include/spdk/vhost.h 00:07:33.199 TEST_HEADER include/spdk/vmd.h 00:07:33.199 TEST_HEADER include/spdk/xor.h 00:07:33.199 TEST_HEADER include/spdk/zipf.h 00:07:33.199 CXX test/cpp_headers/accel.o 00:07:33.456 CC test/app/stub/stub.o 00:07:33.456 LINK jsoncat 00:07:33.456 CXX test/cpp_headers/accel_module.o 00:07:33.456 CC test/event/event_perf/event_perf.o 00:07:33.716 LINK thread 00:07:33.716 LINK stub 00:07:33.716 CXX test/cpp_headers/assert.o 00:07:33.716 LINK nvme_fuzz 00:07:33.716 CC test/event/reactor/reactor.o 00:07:33.716 LINK event_perf 00:07:33.975 CXX test/cpp_headers/barrier.o 00:07:33.975 LINK vhost_fuzz 00:07:33.975 LINK mem_callbacks 00:07:33.975 LINK reactor 00:07:33.975 CC test/event/reactor_perf/reactor_perf.o 00:07:34.234 CXX test/cpp_headers/base64.o 00:07:34.234 CC test/event/app_repeat/app_repeat.o 00:07:34.234 CC test/env/vtophys/vtophys.o 00:07:34.234 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:34.234 LINK reactor_perf 00:07:34.234 CC app/vhost/vhost.o 00:07:34.234 CC examples/sock/hello_world/hello_sock.o 00:07:34.493 CC test/event/scheduler/scheduler.o 00:07:34.493 CXX test/cpp_headers/bdev.o 00:07:34.493 LINK spdk_top 00:07:34.493 LINK env_dpdk_post_init 00:07:34.493 CXX test/cpp_headers/bdev_module.o 00:07:34.493 LINK vtophys 00:07:34.493 LINK app_repeat 00:07:34.493 CXX test/cpp_headers/bdev_zone.o 00:07:34.753 LINK vhost 00:07:34.753 CXX test/cpp_headers/bit_array.o 00:07:34.753 LINK scheduler 00:07:34.753 LINK hello_sock 00:07:34.753 CC test/env/memory/memory_ut.o 00:07:35.012 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:35.012 CXX test/cpp_headers/bit_pool.o 00:07:35.012 CC 
test/env/pci/pci_ut.o 00:07:35.012 CC examples/accel/perf/accel_perf.o 00:07:35.293 CC examples/blob/hello_world/hello_blob.o 00:07:35.293 CC test/nvme/aer/aer.o 00:07:35.293 CXX test/cpp_headers/blob_bdev.o 00:07:35.293 CC app/spdk_dd/spdk_dd.o 00:07:35.293 LINK hello_fsdev 00:07:35.551 CC app/fio/nvme/fio_plugin.o 00:07:35.551 LINK pci_ut 00:07:35.551 LINK hello_blob 00:07:35.551 CXX test/cpp_headers/blobfs_bdev.o 00:07:35.551 CXX test/cpp_headers/blobfs.o 00:07:35.811 LINK aer 00:07:35.811 CC test/nvme/reset/reset.o 00:07:36.070 LINK accel_perf 00:07:36.070 CXX test/cpp_headers/blob.o 00:07:36.070 CC test/nvme/sgl/sgl.o 00:07:36.070 CC test/nvme/e2edp/nvme_dp.o 00:07:36.070 LINK iscsi_fuzz 00:07:36.070 CC examples/blob/cli/blobcli.o 00:07:36.070 LINK spdk_dd 00:07:36.330 LINK spdk_nvme 00:07:36.330 CXX test/cpp_headers/conf.o 00:07:36.330 LINK reset 00:07:36.330 CC test/nvme/overhead/overhead.o 00:07:36.330 LINK nvme_dp 00:07:36.330 CXX test/cpp_headers/config.o 00:07:36.588 CXX test/cpp_headers/cpuset.o 00:07:36.588 CC app/fio/bdev/fio_plugin.o 00:07:36.588 LINK sgl 00:07:36.588 CC test/rpc_client/rpc_client_test.o 00:07:36.588 CC test/nvme/err_injection/err_injection.o 00:07:36.588 CC test/nvme/startup/startup.o 00:07:36.914 LINK overhead 00:07:36.914 CXX test/cpp_headers/crc16.o 00:07:36.914 LINK memory_ut 00:07:36.914 CC examples/nvme/hello_world/hello_world.o 00:07:36.914 CXX test/cpp_headers/crc32.o 00:07:36.914 LINK err_injection 00:07:36.914 LINK rpc_client_test 00:07:37.180 LINK startup 00:07:37.180 CXX test/cpp_headers/crc64.o 00:07:37.180 LINK blobcli 00:07:37.180 CC examples/nvme/reconnect/reconnect.o 00:07:37.180 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:37.439 LINK hello_world 00:07:37.439 CXX test/cpp_headers/dif.o 00:07:37.439 CC examples/nvme/arbitration/arbitration.o 00:07:37.439 CC test/nvme/reserve/reserve.o 00:07:37.439 CC examples/bdev/hello_world/hello_bdev.o 00:07:37.439 CC test/accel/dif/dif.o 00:07:37.698 LINK spdk_bdev 
00:07:37.698 CXX test/cpp_headers/dma.o 00:07:37.698 LINK reserve 00:07:37.698 CC examples/bdev/bdevperf/bdevperf.o 00:07:37.698 CC examples/nvme/hotplug/hotplug.o 00:07:37.698 CXX test/cpp_headers/endian.o 00:07:37.957 LINK hello_bdev 00:07:37.957 LINK arbitration 00:07:37.957 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:37.957 LINK reconnect 00:07:37.957 CXX test/cpp_headers/env_dpdk.o 00:07:38.215 CC test/nvme/simple_copy/simple_copy.o 00:07:38.215 CC examples/nvme/abort/abort.o 00:07:38.215 CXX test/cpp_headers/env.o 00:07:38.215 LINK nvme_manage 00:07:38.215 LINK hotplug 00:07:38.215 LINK cmb_copy 00:07:38.215 CC test/blobfs/mkfs/mkfs.o 00:07:38.475 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:38.475 CXX test/cpp_headers/event.o 00:07:38.475 LINK simple_copy 00:07:38.475 CXX test/cpp_headers/fd_group.o 00:07:38.475 LINK mkfs 00:07:38.733 CXX test/cpp_headers/fd.o 00:07:38.733 CC test/nvme/connect_stress/connect_stress.o 00:07:38.733 LINK pmr_persistence 00:07:38.733 CXX test/cpp_headers/file.o 00:07:38.733 LINK abort 00:07:38.733 CC test/nvme/boot_partition/boot_partition.o 00:07:38.733 CC test/nvme/compliance/nvme_compliance.o 00:07:38.733 LINK dif 00:07:38.993 LINK connect_stress 00:07:38.993 CXX test/cpp_headers/fsdev.o 00:07:38.993 CC test/nvme/fused_ordering/fused_ordering.o 00:07:38.993 LINK bdevperf 00:07:38.993 LINK boot_partition 00:07:38.993 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:38.993 CC test/nvme/fdp/fdp.o 00:07:39.253 CXX test/cpp_headers/fsdev_module.o 00:07:39.253 CXX test/cpp_headers/ftl.o 00:07:39.253 CC test/lvol/esnap/esnap.o 00:07:39.253 CC test/nvme/cuse/cuse.o 00:07:39.253 LINK fused_ordering 00:07:39.253 LINK doorbell_aers 00:07:39.253 LINK nvme_compliance 00:07:39.253 CC test/bdev/bdevio/bdevio.o 00:07:39.253 CXX test/cpp_headers/fuse_dispatcher.o 00:07:39.512 CXX test/cpp_headers/gpt_spec.o 00:07:39.512 CXX test/cpp_headers/hexlify.o 00:07:39.512 CXX test/cpp_headers/histogram_data.o 00:07:39.512 CXX 
test/cpp_headers/idxd.o 00:07:39.512 LINK fdp 00:07:39.512 CC examples/nvmf/nvmf/nvmf.o 00:07:39.512 CXX test/cpp_headers/idxd_spec.o 00:07:39.512 CXX test/cpp_headers/init.o 00:07:39.512 CXX test/cpp_headers/ioat.o 00:07:39.512 CXX test/cpp_headers/ioat_spec.o 00:07:39.512 CXX test/cpp_headers/iscsi_spec.o 00:07:39.512 CXX test/cpp_headers/json.o 00:07:39.771 CXX test/cpp_headers/jsonrpc.o 00:07:39.771 CXX test/cpp_headers/keyring.o 00:07:39.771 CXX test/cpp_headers/keyring_module.o 00:07:39.771 CXX test/cpp_headers/likely.o 00:07:39.771 CXX test/cpp_headers/log.o 00:07:39.771 LINK bdevio 00:07:39.771 CXX test/cpp_headers/lvol.o 00:07:39.771 LINK nvmf 00:07:40.030 CXX test/cpp_headers/md5.o 00:07:40.030 CXX test/cpp_headers/memory.o 00:07:40.030 CXX test/cpp_headers/mmio.o 00:07:40.030 CXX test/cpp_headers/nbd.o 00:07:40.030 CXX test/cpp_headers/net.o 00:07:40.030 CXX test/cpp_headers/notify.o 00:07:40.030 CXX test/cpp_headers/nvme.o 00:07:40.030 CXX test/cpp_headers/nvme_intel.o 00:07:40.030 CXX test/cpp_headers/nvme_ocssd.o 00:07:40.030 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:40.030 CXX test/cpp_headers/nvme_spec.o 00:07:40.288 CXX test/cpp_headers/nvme_zns.o 00:07:40.288 CXX test/cpp_headers/nvmf_cmd.o 00:07:40.288 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:40.288 CXX test/cpp_headers/nvmf.o 00:07:40.288 CXX test/cpp_headers/nvmf_spec.o 00:07:40.288 CXX test/cpp_headers/nvmf_transport.o 00:07:40.288 CXX test/cpp_headers/opal.o 00:07:40.288 CXX test/cpp_headers/opal_spec.o 00:07:40.288 CXX test/cpp_headers/pci_ids.o 00:07:40.547 CXX test/cpp_headers/pipe.o 00:07:40.547 CXX test/cpp_headers/queue.o 00:07:40.547 CXX test/cpp_headers/reduce.o 00:07:40.547 CXX test/cpp_headers/rpc.o 00:07:40.547 CXX test/cpp_headers/scheduler.o 00:07:40.547 CXX test/cpp_headers/scsi.o 00:07:40.547 CXX test/cpp_headers/scsi_spec.o 00:07:40.547 CXX test/cpp_headers/sock.o 00:07:40.547 CXX test/cpp_headers/stdinc.o 00:07:40.547 CXX test/cpp_headers/string.o 00:07:40.547 CXX 
test/cpp_headers/thread.o 00:07:40.547 CXX test/cpp_headers/trace.o 00:07:40.547 CXX test/cpp_headers/trace_parser.o 00:07:40.807 CXX test/cpp_headers/tree.o 00:07:40.807 CXX test/cpp_headers/ublk.o 00:07:40.807 CXX test/cpp_headers/util.o 00:07:40.807 CXX test/cpp_headers/uuid.o 00:07:40.807 CXX test/cpp_headers/version.o 00:07:40.807 CXX test/cpp_headers/vfio_user_pci.o 00:07:40.807 CXX test/cpp_headers/vfio_user_spec.o 00:07:40.807 CXX test/cpp_headers/vhost.o 00:07:40.807 CXX test/cpp_headers/vmd.o 00:07:40.807 CXX test/cpp_headers/xor.o 00:07:40.807 LINK cuse 00:07:40.807 CXX test/cpp_headers/zipf.o 00:07:47.454 LINK esnap 00:07:47.454 00:07:47.454 real 2m4.195s 00:07:47.454 user 11m53.749s 00:07:47.454 sys 2m9.666s 00:07:47.454 10:36:08 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:07:47.454 10:36:08 make -- common/autotest_common.sh@10 -- $ set +x 00:07:47.454 ************************************ 00:07:47.454 END TEST make 00:07:47.454 ************************************ 00:07:47.454 10:36:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:47.454 10:36:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:47.454 10:36:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:47.454 10:36:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:47.454 10:36:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:47.454 10:36:08 -- pm/common@44 -- $ pid=5242 00:07:47.454 10:36:08 -- pm/common@50 -- $ kill -TERM 5242 00:07:47.454 10:36:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:47.454 10:36:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:47.454 10:36:08 -- pm/common@44 -- $ pid=5243 00:07:47.454 10:36:08 -- pm/common@50 -- $ kill -TERM 5243 00:07:47.454 10:36:08 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:47.454 10:36:08 -- 
spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:47.454 10:36:08 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:47.454 10:36:08 -- common/autotest_common.sh@1691 -- # lcov --version 00:07:47.454 10:36:08 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:47.454 10:36:08 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:47.454 10:36:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.454 10:36:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.454 10:36:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.454 10:36:08 -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.454 10:36:08 -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.454 10:36:08 -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.454 10:36:08 -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.454 10:36:08 -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.454 10:36:08 -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.454 10:36:08 -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.454 10:36:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.454 10:36:08 -- scripts/common.sh@344 -- # case "$op" in 00:07:47.454 10:36:08 -- scripts/common.sh@345 -- # : 1 00:07:47.454 10:36:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.454 10:36:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.454 10:36:08 -- scripts/common.sh@365 -- # decimal 1 00:07:47.454 10:36:08 -- scripts/common.sh@353 -- # local d=1 00:07:47.454 10:36:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.454 10:36:08 -- scripts/common.sh@355 -- # echo 1 00:07:47.454 10:36:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.454 10:36:08 -- scripts/common.sh@366 -- # decimal 2 00:07:47.454 10:36:08 -- scripts/common.sh@353 -- # local d=2 00:07:47.454 10:36:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.454 10:36:08 -- scripts/common.sh@355 -- # echo 2 00:07:47.454 10:36:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.454 10:36:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.454 10:36:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.454 10:36:08 -- scripts/common.sh@368 -- # return 0 00:07:47.454 10:36:08 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.454 10:36:08 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:47.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.454 --rc genhtml_branch_coverage=1 00:07:47.454 --rc genhtml_function_coverage=1 00:07:47.454 --rc genhtml_legend=1 00:07:47.454 --rc geninfo_all_blocks=1 00:07:47.454 --rc geninfo_unexecuted_blocks=1 00:07:47.454 00:07:47.454 ' 00:07:47.454 10:36:08 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:47.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.454 --rc genhtml_branch_coverage=1 00:07:47.454 --rc genhtml_function_coverage=1 00:07:47.454 --rc genhtml_legend=1 00:07:47.454 --rc geninfo_all_blocks=1 00:07:47.454 --rc geninfo_unexecuted_blocks=1 00:07:47.454 00:07:47.454 ' 00:07:47.454 10:36:08 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:47.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.454 --rc genhtml_branch_coverage=1 00:07:47.454 --rc 
genhtml_function_coverage=1 00:07:47.454 --rc genhtml_legend=1 00:07:47.454 --rc geninfo_all_blocks=1 00:07:47.454 --rc geninfo_unexecuted_blocks=1 00:07:47.454 00:07:47.454 ' 00:07:47.454 10:36:08 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:47.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.454 --rc genhtml_branch_coverage=1 00:07:47.454 --rc genhtml_function_coverage=1 00:07:47.454 --rc genhtml_legend=1 00:07:47.454 --rc geninfo_all_blocks=1 00:07:47.454 --rc geninfo_unexecuted_blocks=1 00:07:47.454 00:07:47.454 ' 00:07:47.454 10:36:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:47.454 10:36:08 -- nvmf/common.sh@7 -- # uname -s 00:07:47.454 10:36:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.454 10:36:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.454 10:36:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.455 10:36:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.455 10:36:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.455 10:36:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.455 10:36:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.455 10:36:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.455 10:36:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.455 10:36:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.455 10:36:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9fa3128a-707e-46f0-80ce-82e26fbba9c2 00:07:47.455 10:36:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=9fa3128a-707e-46f0-80ce-82e26fbba9c2 00:07:47.455 10:36:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.455 10:36:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.455 10:36:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:47.455 10:36:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:47.455 10:36:08 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.455 10:36:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.455 10:36:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.455 10:36:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.455 10:36:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.455 10:36:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.455 10:36:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.455 10:36:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.455 10:36:08 -- paths/export.sh@5 -- # export PATH 00:07:47.455 10:36:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.455 10:36:08 -- nvmf/common.sh@51 -- # : 0 00:07:47.455 10:36:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.455 10:36:08 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.455 10:36:08 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:07:47.455 10:36:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.455 10:36:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.455 10:36:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.455 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.455 10:36:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.455 10:36:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.455 10:36:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.455 10:36:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:47.455 10:36:08 -- spdk/autotest.sh@32 -- # uname -s 00:07:47.455 10:36:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:47.455 10:36:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:47.455 10:36:08 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:47.455 10:36:08 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:47.455 10:36:08 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:47.455 10:36:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:47.455 10:36:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:47.455 10:36:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:47.455 10:36:08 -- spdk/autotest.sh@48 -- # udevadm_pid=54573 00:07:47.455 10:36:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:47.455 10:36:08 -- pm/common@17 -- # local monitor 00:07:47.455 10:36:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:47.455 10:36:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:47.455 10:36:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:47.455 10:36:08 -- pm/common@25 -- # sleep 1 00:07:47.455 10:36:08 -- pm/common@21 -- # date +%s 00:07:47.455 10:36:08 -- 
pm/common@21 -- # date +%s 00:07:47.455 10:36:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730284568 00:07:47.455 10:36:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730284568 00:07:47.455 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730284568_collect-cpu-load.pm.log 00:07:47.455 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730284568_collect-vmstat.pm.log 00:07:48.390 10:36:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:48.390 10:36:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:48.390 10:36:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.390 10:36:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.390 10:36:09 -- spdk/autotest.sh@59 -- # create_test_list 00:07:48.390 10:36:09 -- common/autotest_common.sh@750 -- # xtrace_disable 00:07:48.390 10:36:09 -- common/autotest_common.sh@10 -- # set +x 00:07:48.390 10:36:09 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:48.390 10:36:09 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:48.390 10:36:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:48.390 10:36:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:48.390 10:36:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:48.390 10:36:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:48.390 10:36:09 -- common/autotest_common.sh@1455 -- # uname 00:07:48.390 10:36:09 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:07:48.390 10:36:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:48.390 10:36:09 -- common/autotest_common.sh@1475 -- 
# uname 00:07:48.390 10:36:09 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:07:48.390 10:36:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:48.390 10:36:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:48.648 lcov: LCOV version 1.15 00:07:48.648 10:36:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:06.736 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:06.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:24.824 10:36:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:24.824 10:36:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:24.824 10:36:43 -- common/autotest_common.sh@10 -- # set +x 00:08:24.824 10:36:43 -- spdk/autotest.sh@78 -- # rm -f 00:08:24.824 10:36:43 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:24.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:24.824 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:24.824 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:24.824 10:36:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:24.824 10:36:44 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:24.824 10:36:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:24.824 10:36:44 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:24.824 
10:36:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:24.824 10:36:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:24.824 10:36:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:24.824 10:36:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:24.824 10:36:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:24.824 10:36:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:24.824 10:36:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:24.824 10:36:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:24.824 10:36:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:24.824 10:36:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:24.824 10:36:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:24.824 10:36:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:08:24.824 10:36:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:08:24.824 10:36:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:24.824 10:36:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:24.824 10:36:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:24.824 10:36:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:08:24.824 10:36:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:08:24.824 10:36:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:24.824 10:36:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:24.824 10:36:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:24.824 10:36:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:24.824 10:36:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:24.824 10:36:44 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:08:24.824 10:36:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:24.824 10:36:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:24.824 No valid GPT data, bailing 00:08:24.824 10:36:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:24.824 10:36:44 -- scripts/common.sh@394 -- # pt= 00:08:24.824 10:36:44 -- scripts/common.sh@395 -- # return 1 00:08:24.824 10:36:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:24.824 1+0 records in 00:08:24.824 1+0 records out 00:08:24.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387899 s, 270 MB/s 00:08:24.824 10:36:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:24.824 10:36:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:24.824 10:36:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:24.824 10:36:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:24.824 10:36:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:24.824 No valid GPT data, bailing 00:08:24.824 10:36:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:24.824 10:36:44 -- scripts/common.sh@394 -- # pt= 00:08:24.824 10:36:44 -- scripts/common.sh@395 -- # return 1 00:08:24.824 10:36:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:24.824 1+0 records in 00:08:24.824 1+0 records out 00:08:24.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469026 s, 224 MB/s 00:08:24.824 10:36:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:24.824 10:36:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:24.824 10:36:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:24.824 10:36:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:24.824 10:36:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:08:24.824 No valid GPT data, bailing 00:08:24.824 10:36:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:24.824 10:36:44 -- scripts/common.sh@394 -- # pt= 00:08:24.824 10:36:44 -- scripts/common.sh@395 -- # return 1 00:08:24.824 10:36:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:24.824 1+0 records in 00:08:24.824 1+0 records out 00:08:24.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496008 s, 211 MB/s 00:08:24.824 10:36:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:24.824 10:36:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:24.824 10:36:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:24.824 10:36:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:24.824 10:36:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:24.824 No valid GPT data, bailing 00:08:24.824 10:36:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:24.824 10:36:44 -- scripts/common.sh@394 -- # pt= 00:08:24.824 10:36:44 -- scripts/common.sh@395 -- # return 1 00:08:24.824 10:36:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:24.824 1+0 records in 00:08:24.824 1+0 records out 00:08:24.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472783 s, 222 MB/s 00:08:24.824 10:36:44 -- spdk/autotest.sh@105 -- # sync 00:08:24.824 10:36:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:24.824 10:36:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:24.824 10:36:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:25.422 10:36:46 -- spdk/autotest.sh@111 -- # uname -s 00:08:25.422 10:36:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:25.422 10:36:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:25.422 10:36:46 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
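Each wipe cycle above follows the same pattern: `spdk-gpt.py` bails out with "No valid GPT data", `blkid -s PTTYPE` finds no partition-table type either, and the namespace's first MiB is then zeroed with `dd`. A hedged sketch of that step against a scratch file instead of a real `/dev/nvme*` node (safe to run without root; on a random-data file `blkid` reports no PTTYPE, so the wipe branch is taken):

```shell
#!/usr/bin/env bash
# If blkid reports no partition-table type, zero the first 1 MiB in place.
wipe_if_no_pt() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2>/dev/null) || true
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$block" bs=1M count=1 conv=notrunc status=none
        return 0
    fi
    return 1            # partition table present: leave the device alone
}

img=$(mktemp)
head -c 2M /dev/urandom > "$img"    # stand-in device, no partition table
wipe_if_no_pt "$img" && echo "wiped first MiB"
# First MiB is now zeroed; the second MiB is untouched (conv=notrunc).
cmp -s <(head -c 1M /dev/zero) <(head -c 1M "$img") && echo "verified"
rm -f "$img"
```

`conv=notrunc` matters on real block devices too: `dd` only overwrites the first extent rather than attempting to truncate the node.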
00:08:25.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:25.992 Hugepages 00:08:25.992 node hugesize free / total 00:08:25.992 node0 1048576kB 0 / 0 00:08:25.992 node0 2048kB 0 / 0 00:08:25.992 00:08:25.992 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:25.992 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:26.252 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:26.252 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:26.252 10:36:47 -- spdk/autotest.sh@117 -- # uname -s 00:08:26.252 10:36:47 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:26.252 10:36:47 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:26.252 10:36:47 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:26.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:27.076 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:27.076 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:27.076 10:36:48 -- common/autotest_common.sh@1515 -- # sleep 1 00:08:28.014 10:36:49 -- common/autotest_common.sh@1516 -- # bdfs=() 00:08:28.014 10:36:49 -- common/autotest_common.sh@1516 -- # local bdfs 00:08:28.014 10:36:49 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:08:28.014 10:36:49 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:08:28.014 10:36:49 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:28.014 10:36:49 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:28.014 10:36:49 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:28.014 10:36:49 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:28.014 10:36:49 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:28.273 10:36:49 -- 
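The `node0 1048576kB 0 / 0` and `node0 2048kB 0 / 0` lines in the `setup.sh status` output above are per-NUMA-node hugepage counters read from sysfs. A sketch of that report, with the node root parameterized so it can run against a mock tree (a real system would pass `/sys/devices/system/node`):

```shell
#!/usr/bin/env bash
# Print "nodeN <size> free / total" for every hugepage pool on every node.
hugepage_report() {
    local node_root=$1 node pool
    for node in "$node_root"/node*; do
        for pool in "$node"/hugepages/hugepages-*; do
            printf '%s %s %s / %s\n' "${node##*/}" "${pool##*hugepages-}" \
                "$(<"$pool/free_hugepages")" "$(<"$pool/nr_hugepages")"
        done
    done
}

mock=$(mktemp -d)
for size in 1048576kB 2048kB; do
    mkdir -p "$mock/node0/hugepages/hugepages-$size"
    echo 0 > "$mock/node0/hugepages/hugepages-$size/free_hugepages"
    echo 0 > "$mock/node0/hugepages/hugepages-$size/nr_hugepages"
done
hugepage_report "$mock"
# -> node0 1048576kB 0 / 0
#    node0 2048kB 0 / 0
rm -rf "$mock"
```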
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:08:28.273 10:36:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:28.273 10:36:49 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:28.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:28.532 Waiting for block devices as requested 00:08:28.532 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.532 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.790 10:36:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:28.790 10:36:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:28.790 10:36:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:08:28.790 10:36:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:28.790 10:36:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:28.790 10:36:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:28.790 10:36:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:28.790 10:36:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:08:28.790 10:36:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:08:28.790 10:36:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:08:28.790 10:36:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:08:28.790 10:36:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:28.790 10:36:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:28.790 10:36:50 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:28.790 10:36:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:28.790 10:36:50 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:08:28.790 10:36:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:08:28.790 10:36:50 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:28.790 10:36:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:28.790 10:36:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:28.790 10:36:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:28.790 10:36:50 -- common/autotest_common.sh@1541 -- # continue 00:08:28.790 10:36:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:08:28.790 10:36:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:28.790 10:36:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:08:28.790 10:36:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:28.790 10:36:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:28.790 10:36:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:28.790 10:36:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:28.790 10:36:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:08:28.790 10:36:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:08:28.790 10:36:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:08:28.790 10:36:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:08:28.790 10:36:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:08:28.790 10:36:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:08:28.790 10:36:50 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:08:28.790 10:36:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:08:28.790 10:36:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:08:28.790 10:36:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:08:28.790 10:36:50 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:08:28.790 10:36:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:08:28.790 10:36:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:08:28.790 10:36:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:08:28.790 10:36:50 -- common/autotest_common.sh@1541 -- # continue 00:08:28.790 10:36:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:28.790 10:36:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:28.790 10:36:50 -- common/autotest_common.sh@10 -- # set +x 00:08:28.790 10:36:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:28.790 10:36:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.790 10:36:50 -- common/autotest_common.sh@10 -- # set +x 00:08:28.790 10:36:50 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:29.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:29.727 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:29.727 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:29.727 10:36:51 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:29.727 10:36:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.727 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:08:29.727 10:36:51 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:29.727 10:36:51 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:29.727 10:36:51 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:29.727 10:36:51 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:29.727 10:36:51 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:29.727 10:36:51 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:29.727 10:36:51 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:29.727 10:36:51 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:29.727 
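The `grep oacs | cut -d: -f2` pipelines above pull the Optional Admin Command Support field out of `nvme id-ctrl` and test bit 3 (`0x8`, Namespace Management); `unvmcap` must also be 0 before the cleanup continues. A sketch of that parse using canned `id-ctrl` output in place of the real `nvme` CLI (which needs root and actual hardware):

```shell
#!/usr/bin/env bash
# Canned `nvme id-ctrl` output; the real tool prints one "field : value"
# line per identify field.
id_ctrl='vid       : 0x1b36
oacs      : 0x12a
unvmcap   : 0'

oacs=$(grep oacs <<<"$id_ctrl" | cut -d: -f2)        # ' 0x12a'
oacs_ns_manage=$(( oacs & 0x8 ))                     # bit 3 = NS management
unvmcap=$(grep unvmcap <<<"$id_ctrl" | cut -d: -f2)

if (( oacs_ns_manage != 0 )) && (( unvmcap == 0 )); then
    echo "namespace management supported; no unallocated capacity"
fi
```

Shell arithmetic tolerates the leading space `cut` leaves in the value and understands the `0x` prefix, which is why the trace can feed `' 0x12a'` straight into `$(( ... & 0x8 ))`.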
10:36:51 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:29.727 10:36:51 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:29.727 10:36:51 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:29.727 10:36:51 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:29.727 10:36:51 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:29.727 10:36:51 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:08:29.727 10:36:51 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:29.727 10:36:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:29.727 10:36:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:29.727 10:36:51 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:29.727 10:36:51 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:29.727 10:36:51 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:29.727 10:36:51 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:29.727 10:36:51 -- common/autotest_common.sh@1564 -- # device=0x0010 00:08:29.727 10:36:51 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:29.727 10:36:51 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:08:29.727 10:36:51 -- common/autotest_common.sh@1570 -- # return 0 00:08:29.727 10:36:51 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:08:29.727 10:36:51 -- common/autotest_common.sh@1578 -- # return 0 00:08:29.727 10:36:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:29.727 10:36:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:29.727 10:36:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:29.727 10:36:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:29.727 10:36:51 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:29.727 10:36:51 -- 
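`opal_revert_cleanup` above keeps only controllers whose sysfs PCI device-ID file matches `0x0a54`; the emulated controllers report `0x0010`, so the filtered list is empty and the function returns early. A sketch of that filter with the sysfs root parameterized for a mock tree (the function name and mock layout are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Print the BDFs whose sysfs device-ID file matches the wanted ID.
filter_bdfs_by_id() {
    local sysfs_root=$1 want=$2 bdf
    shift 2
    for bdf in "$@"; do
        [[ $(<"$sysfs_root/devices/$bdf/device") == "$want" ]] &&
            printf '%s\n' "$bdf"
    done
    return 0
}

mock=$(mktemp -d)
mkdir -p "$mock/devices/0000:00:10.0" "$mock/devices/0000:00:11.0"
echo 0x0010 > "$mock/devices/0000:00:10.0/device"
echo 0x0a54 > "$mock/devices/0000:00:11.0/device"
filter_bdfs_by_id "$mock" 0x0a54 0000:00:10.0 0000:00:11.0   # -> 0000:00:11.0
rm -rf "$mock"
```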
common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.727 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:08:29.727 10:36:51 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:29.727 10:36:51 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:29.727 10:36:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:29.727 10:36:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.727 10:36:51 -- common/autotest_common.sh@10 -- # set +x 00:08:29.727 ************************************ 00:08:29.727 START TEST env 00:08:29.727 ************************************ 00:08:29.727 10:36:51 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:29.986 * Looking for test storage... 00:08:29.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:29.986 10:36:51 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:29.986 10:36:51 env -- common/autotest_common.sh@1691 -- # lcov --version 00:08:29.986 10:36:51 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:29.986 10:36:51 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:29.986 10:36:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.986 10:36:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.986 10:36:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.986 10:36:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.986 10:36:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.986 10:36:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.986 10:36:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.986 10:36:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.986 10:36:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.986 10:36:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.986 10:36:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.986 10:36:51 env -- 
scripts/common.sh@344 -- # case "$op" in 00:08:29.986 10:36:51 env -- scripts/common.sh@345 -- # : 1 00:08:29.986 10:36:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.986 10:36:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.986 10:36:51 env -- scripts/common.sh@365 -- # decimal 1 00:08:29.986 10:36:51 env -- scripts/common.sh@353 -- # local d=1 00:08:29.986 10:36:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.986 10:36:51 env -- scripts/common.sh@355 -- # echo 1 00:08:29.986 10:36:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.986 10:36:51 env -- scripts/common.sh@366 -- # decimal 2 00:08:29.986 10:36:51 env -- scripts/common.sh@353 -- # local d=2 00:08:29.986 10:36:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.986 10:36:51 env -- scripts/common.sh@355 -- # echo 2 00:08:29.986 10:36:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.986 10:36:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.986 10:36:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.986 10:36:51 env -- scripts/common.sh@368 -- # return 0 00:08:29.987 10:36:51 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.987 10:36:51 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:29.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.987 --rc genhtml_branch_coverage=1 00:08:29.987 --rc genhtml_function_coverage=1 00:08:29.987 --rc genhtml_legend=1 00:08:29.987 --rc geninfo_all_blocks=1 00:08:29.987 --rc geninfo_unexecuted_blocks=1 00:08:29.987 00:08:29.987 ' 00:08:29.987 10:36:51 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:29.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.987 --rc genhtml_branch_coverage=1 00:08:29.987 --rc genhtml_function_coverage=1 00:08:29.987 --rc genhtml_legend=1 00:08:29.987 --rc 
geninfo_all_blocks=1 00:08:29.987 --rc geninfo_unexecuted_blocks=1 00:08:29.987 00:08:29.987 ' 00:08:29.987 10:36:51 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:29.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.987 --rc genhtml_branch_coverage=1 00:08:29.987 --rc genhtml_function_coverage=1 00:08:29.987 --rc genhtml_legend=1 00:08:29.987 --rc geninfo_all_blocks=1 00:08:29.987 --rc geninfo_unexecuted_blocks=1 00:08:29.987 00:08:29.987 ' 00:08:29.987 10:36:51 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:29.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.987 --rc genhtml_branch_coverage=1 00:08:29.987 --rc genhtml_function_coverage=1 00:08:29.987 --rc genhtml_legend=1 00:08:29.987 --rc geninfo_all_blocks=1 00:08:29.987 --rc geninfo_unexecuted_blocks=1 00:08:29.987 00:08:29.987 ' 00:08:29.987 10:36:51 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:29.987 10:36:51 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:29.987 10:36:51 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.987 10:36:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:29.987 ************************************ 00:08:29.987 START TEST env_memory 00:08:29.987 ************************************ 00:08:29.987 10:36:51 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:29.987 00:08:29.987 00:08:29.987 CUnit - A unit testing framework for C - Version 2.1-3 00:08:29.987 http://cunit.sourceforge.net/ 00:08:29.987 00:08:29.987 00:08:29.987 Suite: memory 00:08:30.246 Test: alloc and free memory map ...[2024-10-30 10:36:51.457870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:30.246 passed 00:08:30.246 Test: mem map translation ...[2024-10-30 10:36:51.518992] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:30.246 [2024-10-30 10:36:51.519235] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:30.246 [2024-10-30 10:36:51.519499] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:30.246 [2024-10-30 10:36:51.519698] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:30.246 passed 00:08:30.246 Test: mem map registration ...[2024-10-30 10:36:51.618370] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:30.246 [2024-10-30 10:36:51.618651] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:30.246 passed 00:08:30.530 Test: mem map adjacent registrations ...passed 00:08:30.530 00:08:30.530 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.530 suites 1 1 n/a 0 0 00:08:30.530 tests 4 4 4 0 0 00:08:30.530 asserts 152 152 152 0 n/a 00:08:30.530 00:08:30.530 Elapsed time = 0.344 seconds 00:08:30.530 00:08:30.530 real 0m0.383s 00:08:30.530 user 0m0.355s 00:08:30.530 sys 0m0.020s 00:08:30.530 ************************************ 00:08:30.530 END TEST env_memory 00:08:30.530 ************************************ 00:08:30.530 10:36:51 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.530 10:36:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 10:36:51 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:30.530 
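A few records back the coverage setup gates on `lt 1.15 2`, which `scripts/common.sh` implements by splitting both version strings into fields and comparing them numerically, with missing fields treated as 0. A simplified, less-than-only sketch of that comparison (the real helper also handles `>`, `=`, and the `.-:` separator set):

```shell
#!/usr/bin/env bash
# Field-wise numeric version comparison: returns 0 iff $1 < $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                              # equal is not less-than
}

version_lt 1.15 2      && echo "lcov 1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
version_lt 2 1.15      || echo "2 is not < 1.15"
```

Field-wise comparison is what makes `2.39.2 < 2.40` come out right; a plain string compare would get it wrong as soon as a field reaches two digits.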
10:36:51 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:30.530 10:36:51 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.530 10:36:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:30.530 ************************************ 00:08:30.530 START TEST env_vtophys 00:08:30.530 ************************************ 00:08:30.530 10:36:51 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:30.530 EAL: lib.eal log level changed from notice to debug 00:08:30.530 EAL: Detected lcore 0 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 1 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 2 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 3 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 4 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 5 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 6 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 7 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 8 as core 0 on socket 0 00:08:30.530 EAL: Detected lcore 9 as core 0 on socket 0 00:08:30.530 EAL: Maximum logical cores by configuration: 128 00:08:30.530 EAL: Detected CPU lcores: 10 00:08:30.530 EAL: Detected NUMA nodes: 1 00:08:30.530 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:30.530 EAL: Detected shared linkage of DPDK 00:08:30.530 EAL: No shared files mode enabled, IPC will be disabled 00:08:30.530 EAL: Selected IOVA mode 'PA' 00:08:30.530 EAL: Probing VFIO support... 00:08:30.530 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:30.530 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:30.530 EAL: Ask a virtual area of 0x2e000 bytes 00:08:30.530 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:30.530 EAL: Setting up physically contiguous memory... 
00:08:30.530 EAL: Setting maximum number of open files to 524288 00:08:30.530 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:30.530 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:30.530 EAL: Ask a virtual area of 0x61000 bytes 00:08:30.530 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:30.530 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:30.530 EAL: Ask a virtual area of 0x400000000 bytes 00:08:30.530 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:30.530 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:30.530 EAL: Ask a virtual area of 0x61000 bytes 00:08:30.530 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:30.530 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:30.530 EAL: Ask a virtual area of 0x400000000 bytes 00:08:30.530 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:30.530 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:30.530 EAL: Ask a virtual area of 0x61000 bytes 00:08:30.530 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:30.530 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:30.530 EAL: Ask a virtual area of 0x400000000 bytes 00:08:30.530 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:30.530 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:30.530 EAL: Ask a virtual area of 0x61000 bytes 00:08:30.530 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:30.530 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:30.530 EAL: Ask a virtual area of 0x400000000 bytes 00:08:30.530 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:30.530 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:30.530 EAL: Hugepages will be freed exactly as allocated. 
00:08:30.530 EAL: No shared files mode enabled, IPC is disabled 00:08:30.530 EAL: No shared files mode enabled, IPC is disabled 00:08:30.788 EAL: TSC frequency is ~2200000 KHz 00:08:30.788 EAL: Main lcore 0 is ready (tid=7fc56e4f4a40;cpuset=[0]) 00:08:30.788 EAL: Trying to obtain current memory policy. 00:08:30.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:30.788 EAL: Restoring previous memory policy: 0 00:08:30.788 EAL: request: mp_malloc_sync 00:08:30.788 EAL: No shared files mode enabled, IPC is disabled 00:08:30.788 EAL: Heap on socket 0 was expanded by 2MB 00:08:30.788 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:30.788 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:30.788 EAL: Mem event callback 'spdk:(nil)' registered 00:08:30.788 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:30.788 00:08:30.788 00:08:30.788 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.788 http://cunit.sourceforge.net/ 00:08:30.788 00:08:30.788 00:08:30.788 Suite: components_suite 00:08:31.354 Test: vtophys_malloc_test ...passed 00:08:31.354 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:31.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.354 EAL: Restoring previous memory policy: 4 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was expanded by 4MB 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was shrunk by 4MB 00:08:31.354 EAL: Trying to obtain current memory policy. 
00:08:31.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.354 EAL: Restoring previous memory policy: 4 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was expanded by 6MB 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was shrunk by 6MB 00:08:31.354 EAL: Trying to obtain current memory policy. 00:08:31.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.354 EAL: Restoring previous memory policy: 4 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was expanded by 10MB 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was shrunk by 10MB 00:08:31.354 EAL: Trying to obtain current memory policy. 00:08:31.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.354 EAL: Restoring previous memory policy: 4 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was expanded by 18MB 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was shrunk by 18MB 00:08:31.354 EAL: Trying to obtain current memory policy. 
00:08:31.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.354 EAL: Restoring previous memory policy: 4 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was expanded by 34MB 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was shrunk by 34MB 00:08:31.354 EAL: Trying to obtain current memory policy. 00:08:31.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.354 EAL: Restoring previous memory policy: 4 00:08:31.354 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.354 EAL: request: mp_malloc_sync 00:08:31.354 EAL: No shared files mode enabled, IPC is disabled 00:08:31.354 EAL: Heap on socket 0 was expanded by 66MB 00:08:31.613 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.613 EAL: request: mp_malloc_sync 00:08:31.613 EAL: No shared files mode enabled, IPC is disabled 00:08:31.613 EAL: Heap on socket 0 was shrunk by 66MB 00:08:31.613 EAL: Trying to obtain current memory policy. 00:08:31.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:31.613 EAL: Restoring previous memory policy: 4 00:08:31.613 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.613 EAL: request: mp_malloc_sync 00:08:31.613 EAL: No shared files mode enabled, IPC is disabled 00:08:31.613 EAL: Heap on socket 0 was expanded by 130MB 00:08:31.872 EAL: Calling mem event callback 'spdk:(nil)' 00:08:31.872 EAL: request: mp_malloc_sync 00:08:31.872 EAL: No shared files mode enabled, IPC is disabled 00:08:31.872 EAL: Heap on socket 0 was shrunk by 130MB 00:08:32.131 EAL: Trying to obtain current memory policy. 
00:08:32.131 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:32.131 EAL: Restoring previous memory policy: 4 00:08:32.131 EAL: Calling mem event callback 'spdk:(nil)' 00:08:32.131 EAL: request: mp_malloc_sync 00:08:32.131 EAL: No shared files mode enabled, IPC is disabled 00:08:32.131 EAL: Heap on socket 0 was expanded by 258MB 00:08:32.698 EAL: Calling mem event callback 'spdk:(nil)' 00:08:32.698 EAL: request: mp_malloc_sync 00:08:32.698 EAL: No shared files mode enabled, IPC is disabled 00:08:32.698 EAL: Heap on socket 0 was shrunk by 258MB 00:08:32.957 EAL: Trying to obtain current memory policy. 00:08:32.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:33.214 EAL: Restoring previous memory policy: 4 00:08:33.214 EAL: Calling mem event callback 'spdk:(nil)' 00:08:33.214 EAL: request: mp_malloc_sync 00:08:33.214 EAL: No shared files mode enabled, IPC is disabled 00:08:33.214 EAL: Heap on socket 0 was expanded by 514MB 00:08:34.150 EAL: Calling mem event callback 'spdk:(nil)' 00:08:34.150 EAL: request: mp_malloc_sync 00:08:34.150 EAL: No shared files mode enabled, IPC is disabled 00:08:34.150 EAL: Heap on socket 0 was shrunk by 514MB 00:08:34.716 EAL: Trying to obtain current memory policy. 
00:08:34.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:35.285 EAL: Restoring previous memory policy: 4 00:08:35.285 EAL: Calling mem event callback 'spdk:(nil)' 00:08:35.285 EAL: request: mp_malloc_sync 00:08:35.285 EAL: No shared files mode enabled, IPC is disabled 00:08:35.285 EAL: Heap on socket 0 was expanded by 1026MB 00:08:37.184 EAL: Calling mem event callback 'spdk:(nil)' 00:08:37.184 EAL: request: mp_malloc_sync 00:08:37.184 EAL: No shared files mode enabled, IPC is disabled 00:08:37.184 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:38.559 passed 00:08:38.559 00:08:38.559 Run Summary: Type Total Ran Passed Failed Inactive 00:08:38.559 suites 1 1 n/a 0 0 00:08:38.559 tests 2 2 2 0 0 00:08:38.559 asserts 5705 5705 5705 0 n/a 00:08:38.559 00:08:38.559 Elapsed time = 7.693 seconds 00:08:38.559 EAL: Calling mem event callback 'spdk:(nil)' 00:08:38.559 EAL: request: mp_malloc_sync 00:08:38.559 EAL: No shared files mode enabled, IPC is disabled 00:08:38.559 EAL: Heap on socket 0 was shrunk by 2MB 00:08:38.559 EAL: No shared files mode enabled, IPC is disabled 00:08:38.559 EAL: No shared files mode enabled, IPC is disabled 00:08:38.559 EAL: No shared files mode enabled, IPC is disabled 00:08:38.559 00:08:38.559 real 0m8.054s 00:08:38.559 user 0m6.800s 00:08:38.559 sys 0m1.079s 00:08:38.559 10:36:59 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.559 10:36:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:38.559 ************************************ 00:08:38.559 END TEST env_vtophys 00:08:38.560 ************************************ 00:08:38.560 10:36:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:38.560 10:36:59 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:38.560 10:36:59 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.560 10:36:59 env -- common/autotest_common.sh@10 -- # set +x 00:08:38.560 
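The env_vtophys log above walks a doubling allocation ladder: each "Heap on socket 0 was expanded by NMB" size is a power-of-two request plus roughly 2 MB of malloc-heap overhead (34 = 32+2, 66 = 64+2, 130 = 128+2, 258 = 256+2, 514 = 512+2, 1026 = 1024+2). A minimal sketch of that pattern (an illustration of the sizes in the log, not SPDK or DPDK code):

```python
def expansion_ladder(start_exp=5, end_exp=10, overhead_mb=2):
    """Expected heap-expansion sizes in MB for a doubling allocation ladder.

    Assumes ~overhead_mb MB of heap bookkeeping per expansion, which is
    what the EAL messages above appear to show.
    """
    return [(1 << e) + overhead_mb for e in range(start_exp, end_exp + 1)]

# Matches the expansion sizes reported by EAL in the log above.
print(expansion_ladder())  # → [34, 66, 130, 258, 514, 1026]
```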
************************************ 00:08:38.560 START TEST env_pci 00:08:38.560 ************************************ 00:08:38.560 10:36:59 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:38.560 00:08:38.560 00:08:38.560 CUnit - A unit testing framework for C - Version 2.1-3 00:08:38.560 http://cunit.sourceforge.net/ 00:08:38.560 00:08:38.560 00:08:38.560 Suite: pci 00:08:38.560 Test: pci_hook ...[2024-10-30 10:36:59.957873] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56921 has claimed it 00:08:38.560 passed 00:08:38.560 00:08:38.560 Run Summary: Type Total Ran Passed Failed Inactive 00:08:38.560 suites 1 1 n/a 0 0 00:08:38.560 tests 1 1 1 0 0 00:08:38.560 asserts 25 25 25 0 n/a 00:08:38.560 00:08:38.560 Elapsed time = 0.007 seconds 00:08:38.560 EAL: Cannot find device (10000:00:01.0) 00:08:38.560 EAL: Failed to attach device on primary process 00:08:38.560 00:08:38.560 real 0m0.075s 00:08:38.560 user 0m0.031s 00:08:38.560 sys 0m0.043s 00:08:38.560 10:37:00 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:38.560 10:37:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:38.560 ************************************ 00:08:38.560 END TEST env_pci 00:08:38.560 ************************************ 00:08:38.818 10:37:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:38.818 10:37:00 env -- env/env.sh@15 -- # uname 00:08:38.818 10:37:00 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:38.818 10:37:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:38.818 10:37:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:38.818 10:37:00 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:38.818 10:37:00 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.818 10:37:00 env -- common/autotest_common.sh@10 -- # set +x 00:08:38.818 ************************************ 00:08:38.818 START TEST env_dpdk_post_init 00:08:38.818 ************************************ 00:08:38.818 10:37:00 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:38.818 EAL: Detected CPU lcores: 10 00:08:38.818 EAL: Detected NUMA nodes: 1 00:08:38.818 EAL: Detected shared linkage of DPDK 00:08:38.818 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:38.818 EAL: Selected IOVA mode 'PA' 00:08:38.818 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:39.107 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:39.107 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:39.107 Starting DPDK initialization... 00:08:39.107 Starting SPDK post initialization... 00:08:39.107 SPDK NVMe probe 00:08:39.107 Attaching to 0000:00:10.0 00:08:39.107 Attaching to 0000:00:11.0 00:08:39.107 Attached to 0000:00:10.0 00:08:39.107 Attached to 0000:00:11.0 00:08:39.107 Cleaning up... 
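The env_dpdk_post_init run above probes two emulated NVMe controllers (vendor:device 1b36:0010) and reports their PCI addresses. A small sketch that pulls driver, device ID, and BDF out of EAL probe lines — the regex and sample lines are assumptions modeled on the log text, not an SPDK API:

```python
import re

# Matches lines like:
#   EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
PROBE_RE = re.compile(r"Probe PCI driver: (\S+) \((\w{4}:\w{4})\) device: (\S+)")

def parse_probes(lines):
    """Return (driver, vendor:device, bdf) tuples for each probe line."""
    return [m.groups() for line in lines if (m := PROBE_RE.search(line))]

log = [
    "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)",
    "EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)",
]
print(parse_probes(log))
```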
00:08:39.107 00:08:39.107 real 0m0.298s 00:08:39.107 user 0m0.105s 00:08:39.107 sys 0m0.093s 00:08:39.107 10:37:00 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.107 10:37:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:39.107 ************************************ 00:08:39.107 END TEST env_dpdk_post_init 00:08:39.107 ************************************ 00:08:39.107 10:37:00 env -- env/env.sh@26 -- # uname 00:08:39.107 10:37:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:39.107 10:37:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:39.107 10:37:00 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:39.107 10:37:00 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.107 10:37:00 env -- common/autotest_common.sh@10 -- # set +x 00:08:39.107 ************************************ 00:08:39.107 START TEST env_mem_callbacks 00:08:39.107 ************************************ 00:08:39.107 10:37:00 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:39.107 EAL: Detected CPU lcores: 10 00:08:39.107 EAL: Detected NUMA nodes: 1 00:08:39.107 EAL: Detected shared linkage of DPDK 00:08:39.107 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:39.107 EAL: Selected IOVA mode 'PA' 00:08:39.374 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:39.374 00:08:39.374 00:08:39.374 CUnit - A unit testing framework for C - Version 2.1-3 00:08:39.374 http://cunit.sourceforge.net/ 00:08:39.374 00:08:39.374 00:08:39.374 Suite: memory 00:08:39.374 Test: test ... 
00:08:39.374 register 0x200000200000 2097152 00:08:39.374 malloc 3145728 00:08:39.374 register 0x200000400000 4194304 00:08:39.374 buf 0x2000004fffc0 len 3145728 PASSED 00:08:39.374 malloc 64 00:08:39.374 buf 0x2000004ffec0 len 64 PASSED 00:08:39.374 malloc 4194304 00:08:39.374 register 0x200000800000 6291456 00:08:39.374 buf 0x2000009fffc0 len 4194304 PASSED 00:08:39.374 free 0x2000004fffc0 3145728 00:08:39.374 free 0x2000004ffec0 64 00:08:39.374 unregister 0x200000400000 4194304 PASSED 00:08:39.374 free 0x2000009fffc0 4194304 00:08:39.374 unregister 0x200000800000 6291456 PASSED 00:08:39.374 malloc 8388608 00:08:39.374 register 0x200000400000 10485760 00:08:39.374 buf 0x2000005fffc0 len 8388608 PASSED 00:08:39.374 free 0x2000005fffc0 8388608 00:08:39.374 unregister 0x200000400000 10485760 PASSED 00:08:39.374 passed 00:08:39.374 00:08:39.374 Run Summary: Type Total Ran Passed Failed Inactive 00:08:39.374 suites 1 1 n/a 0 0 00:08:39.374 tests 1 1 1 0 0 00:08:39.374 asserts 15 15 15 0 n/a 00:08:39.374 00:08:39.374 Elapsed time = 0.077 seconds 00:08:39.374 ************************************ 00:08:39.374 END TEST env_mem_callbacks 00:08:39.374 ************************************ 00:08:39.374 00:08:39.374 real 0m0.295s 00:08:39.374 user 0m0.112s 00:08:39.374 sys 0m0.078s 00:08:39.374 10:37:00 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.374 10:37:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:39.374 ************************************ 00:08:39.374 END TEST env 00:08:39.374 ************************************ 00:08:39.374 00:08:39.374 real 0m9.573s 00:08:39.374 user 0m7.595s 00:08:39.374 sys 0m1.572s 00:08:39.374 10:37:00 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:39.374 10:37:00 env -- common/autotest_common.sh@10 -- # set +x 00:08:39.374 10:37:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:39.374 10:37:00 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:39.374 10:37:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:39.374 10:37:00 -- common/autotest_common.sh@10 -- # set +x 00:08:39.374 ************************************ 00:08:39.374 START TEST rpc 00:08:39.374 ************************************ 00:08:39.374 10:37:00 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:39.634 * Looking for test storage... 00:08:39.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.634 10:37:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.634 10:37:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.634 10:37:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.634 10:37:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.634 10:37:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.634 10:37:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.634 10:37:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.634 10:37:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.634 10:37:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.634 10:37:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.634 10:37:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.634 10:37:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:39.634 10:37:00 rpc -- scripts/common.sh@345 -- # : 1 00:08:39.634 10:37:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.634 10:37:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.634 10:37:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:39.634 10:37:00 rpc -- scripts/common.sh@353 -- # local d=1 00:08:39.634 10:37:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.634 10:37:00 rpc -- scripts/common.sh@355 -- # echo 1 00:08:39.634 10:37:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.634 10:37:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:39.634 10:37:00 rpc -- scripts/common.sh@353 -- # local d=2 00:08:39.634 10:37:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.634 10:37:00 rpc -- scripts/common.sh@355 -- # echo 2 00:08:39.634 10:37:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.634 10:37:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.634 10:37:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.634 10:37:00 rpc -- scripts/common.sh@368 -- # return 0 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.634 --rc genhtml_branch_coverage=1 00:08:39.634 --rc genhtml_function_coverage=1 00:08:39.634 --rc genhtml_legend=1 00:08:39.634 --rc geninfo_all_blocks=1 00:08:39.634 --rc geninfo_unexecuted_blocks=1 00:08:39.634 00:08:39.634 ' 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.634 --rc genhtml_branch_coverage=1 00:08:39.634 --rc genhtml_function_coverage=1 00:08:39.634 --rc genhtml_legend=1 00:08:39.634 --rc geninfo_all_blocks=1 00:08:39.634 --rc geninfo_unexecuted_blocks=1 00:08:39.634 00:08:39.634 ' 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:39.634 --rc genhtml_branch_coverage=1 00:08:39.634 --rc genhtml_function_coverage=1 00:08:39.634 --rc genhtml_legend=1 00:08:39.634 --rc geninfo_all_blocks=1 00:08:39.634 --rc geninfo_unexecuted_blocks=1 00:08:39.634 00:08:39.634 ' 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.634 --rc genhtml_branch_coverage=1 00:08:39.634 --rc genhtml_function_coverage=1 00:08:39.634 --rc genhtml_legend=1 00:08:39.634 --rc geninfo_all_blocks=1 00:08:39.634 --rc geninfo_unexecuted_blocks=1 00:08:39.634 00:08:39.634 ' 00:08:39.634 10:37:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57052 00:08:39.634 10:37:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:39.634 10:37:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.634 10:37:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57052 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@833 -- # '[' -z 57052 ']' 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.634 10:37:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.894 [2024-10-30 10:37:01.109576] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:08:39.894 [2024-10-30 10:37:01.110558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57052 ] 00:08:39.894 [2024-10-30 10:37:01.293053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.153 [2024-10-30 10:37:01.454989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:40.153 [2024-10-30 10:37:01.455311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57052' to capture a snapshot of events at runtime. 00:08:40.153 [2024-10-30 10:37:01.455512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.153 [2024-10-30 10:37:01.455665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.153 [2024-10-30 10:37:01.455694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57052 for offline analysis/debug. 
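The rpc test's setup earlier in this log traces scripts/common.sh's `cmp_versions`, which splits each version string on ".", "-" and ":" and compares component-wise (here: `lt 1.15 2` returns 0, so the old-lcov branch is taken). A Python port of that comparison, as an illustration rather than the SPDK script itself:

```python
import re

def lt(v1: str, v2: str) -> bool:
    """True if dotted version v1 < v2, comparing components numerically.

    Mirrors the split-on-".-:" and pad-with-zeros behavior traced above.
    """
    p1 = [int(x) for x in re.split(r"[.:-]", v1)]
    p2 = [int(x) for x in re.split(r"[.:-]", v2)]
    length = max(len(p1), len(p2))
    p1 += [0] * (length - len(p1))
    p2 += [0] * (length - len(p2))
    return p1 < p2

print(lt("1.15", "2"))  # → True, matching the `return 0` in the trace
```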
00:08:40.153 [2024-10-30 10:37:01.457432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.089 10:37:02 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.089 10:37:02 rpc -- common/autotest_common.sh@866 -- # return 0 00:08:41.089 10:37:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:41.089 10:37:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:41.089 10:37:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:41.089 10:37:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:41.089 10:37:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.089 10:37:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.089 10:37:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.089 ************************************ 00:08:41.089 START TEST rpc_integrity 00:08:41.089 ************************************ 00:08:41.089 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:08:41.089 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:41.089 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.089 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.089 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.089 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:41.089 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:41.089 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:41.089 10:37:02 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:41.089 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:41.090 { 00:08:41.090 "name": "Malloc0", 00:08:41.090 "aliases": [ 00:08:41.090 "7afe2641-d88e-4919-9777-f3d1cc45ce50" 00:08:41.090 ], 00:08:41.090 "product_name": "Malloc disk", 00:08:41.090 "block_size": 512, 00:08:41.090 "num_blocks": 16384, 00:08:41.090 "uuid": "7afe2641-d88e-4919-9777-f3d1cc45ce50", 00:08:41.090 "assigned_rate_limits": { 00:08:41.090 "rw_ios_per_sec": 0, 00:08:41.090 "rw_mbytes_per_sec": 0, 00:08:41.090 "r_mbytes_per_sec": 0, 00:08:41.090 "w_mbytes_per_sec": 0 00:08:41.090 }, 00:08:41.090 "claimed": false, 00:08:41.090 "zoned": false, 00:08:41.090 "supported_io_types": { 00:08:41.090 "read": true, 00:08:41.090 "write": true, 00:08:41.090 "unmap": true, 00:08:41.090 "flush": true, 00:08:41.090 "reset": true, 00:08:41.090 "nvme_admin": false, 00:08:41.090 "nvme_io": false, 00:08:41.090 "nvme_io_md": false, 00:08:41.090 "write_zeroes": true, 00:08:41.090 "zcopy": true, 00:08:41.090 "get_zone_info": false, 00:08:41.090 "zone_management": false, 00:08:41.090 "zone_append": false, 00:08:41.090 "compare": false, 00:08:41.090 "compare_and_write": false, 00:08:41.090 "abort": true, 00:08:41.090 "seek_hole": false, 
00:08:41.090 "seek_data": false, 00:08:41.090 "copy": true, 00:08:41.090 "nvme_iov_md": false 00:08:41.090 }, 00:08:41.090 "memory_domains": [ 00:08:41.090 { 00:08:41.090 "dma_device_id": "system", 00:08:41.090 "dma_device_type": 1 00:08:41.090 }, 00:08:41.090 { 00:08:41.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.090 "dma_device_type": 2 00:08:41.090 } 00:08:41.090 ], 00:08:41.090 "driver_specific": {} 00:08:41.090 } 00:08:41.090 ]' 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.090 [2024-10-30 10:37:02.489255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:41.090 [2024-10-30 10:37:02.489579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.090 [2024-10-30 10:37:02.489624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:41.090 [2024-10-30 10:37:02.489649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.090 [2024-10-30 10:37:02.492784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.090 [2024-10-30 10:37:02.493003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:41.090 Passthru0 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:08:41.090 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:41.090 { 00:08:41.090 "name": "Malloc0", 00:08:41.090 "aliases": [ 00:08:41.090 "7afe2641-d88e-4919-9777-f3d1cc45ce50" 00:08:41.090 ], 00:08:41.090 "product_name": "Malloc disk", 00:08:41.090 "block_size": 512, 00:08:41.090 "num_blocks": 16384, 00:08:41.090 "uuid": "7afe2641-d88e-4919-9777-f3d1cc45ce50", 00:08:41.090 "assigned_rate_limits": { 00:08:41.090 "rw_ios_per_sec": 0, 00:08:41.090 "rw_mbytes_per_sec": 0, 00:08:41.090 "r_mbytes_per_sec": 0, 00:08:41.090 "w_mbytes_per_sec": 0 00:08:41.090 }, 00:08:41.090 "claimed": true, 00:08:41.090 "claim_type": "exclusive_write", 00:08:41.090 "zoned": false, 00:08:41.090 "supported_io_types": { 00:08:41.090 "read": true, 00:08:41.090 "write": true, 00:08:41.090 "unmap": true, 00:08:41.090 "flush": true, 00:08:41.090 "reset": true, 00:08:41.090 "nvme_admin": false, 00:08:41.090 "nvme_io": false, 00:08:41.090 "nvme_io_md": false, 00:08:41.090 "write_zeroes": true, 00:08:41.090 "zcopy": true, 00:08:41.090 "get_zone_info": false, 00:08:41.090 "zone_management": false, 00:08:41.090 "zone_append": false, 00:08:41.090 "compare": false, 00:08:41.090 "compare_and_write": false, 00:08:41.090 "abort": true, 00:08:41.090 "seek_hole": false, 00:08:41.090 "seek_data": false, 00:08:41.090 "copy": true, 00:08:41.090 "nvme_iov_md": false 00:08:41.090 }, 00:08:41.090 "memory_domains": [ 00:08:41.090 { 00:08:41.090 "dma_device_id": "system", 00:08:41.090 "dma_device_type": 1 00:08:41.090 }, 00:08:41.090 { 00:08:41.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.090 "dma_device_type": 2 00:08:41.090 } 00:08:41.090 ], 00:08:41.090 "driver_specific": {} 00:08:41.090 }, 00:08:41.090 { 00:08:41.090 "name": "Passthru0", 00:08:41.090 "aliases": [ 00:08:41.090 "2e4a7e61-0f2f-5c74-8ff2-d19df51b6d45" 00:08:41.090 ], 00:08:41.090 "product_name": "passthru", 00:08:41.090 
"block_size": 512, 00:08:41.090 "num_blocks": 16384, 00:08:41.090 "uuid": "2e4a7e61-0f2f-5c74-8ff2-d19df51b6d45", 00:08:41.090 "assigned_rate_limits": { 00:08:41.090 "rw_ios_per_sec": 0, 00:08:41.090 "rw_mbytes_per_sec": 0, 00:08:41.090 "r_mbytes_per_sec": 0, 00:08:41.090 "w_mbytes_per_sec": 0 00:08:41.090 }, 00:08:41.090 "claimed": false, 00:08:41.090 "zoned": false, 00:08:41.090 "supported_io_types": { 00:08:41.090 "read": true, 00:08:41.090 "write": true, 00:08:41.090 "unmap": true, 00:08:41.090 "flush": true, 00:08:41.090 "reset": true, 00:08:41.090 "nvme_admin": false, 00:08:41.090 "nvme_io": false, 00:08:41.090 "nvme_io_md": false, 00:08:41.090 "write_zeroes": true, 00:08:41.090 "zcopy": true, 00:08:41.090 "get_zone_info": false, 00:08:41.090 "zone_management": false, 00:08:41.090 "zone_append": false, 00:08:41.090 "compare": false, 00:08:41.090 "compare_and_write": false, 00:08:41.090 "abort": true, 00:08:41.090 "seek_hole": false, 00:08:41.090 "seek_data": false, 00:08:41.090 "copy": true, 00:08:41.090 "nvme_iov_md": false 00:08:41.090 }, 00:08:41.090 "memory_domains": [ 00:08:41.090 { 00:08:41.090 "dma_device_id": "system", 00:08:41.090 "dma_device_type": 1 00:08:41.090 }, 00:08:41.090 { 00:08:41.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.090 "dma_device_type": 2 00:08:41.090 } 00:08:41.090 ], 00:08:41.090 "driver_specific": { 00:08:41.090 "passthru": { 00:08:41.090 "name": "Passthru0", 00:08:41.090 "base_bdev_name": "Malloc0" 00:08:41.090 } 00:08:41.090 } 00:08:41.090 } 00:08:41.090 ]' 00:08:41.090 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:41.350 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:41.350 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.350 10:37:02 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.350 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.350 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.350 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:41.350 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:41.350 10:37:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:41.350 ************************************ 00:08:41.350 END TEST rpc_integrity 00:08:41.350 ************************************ 00:08:41.350 00:08:41.350 real 0m0.347s 00:08:41.350 user 0m0.213s 00:08:41.350 sys 0m0.036s 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.350 10:37:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.350 10:37:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:41.350 10:37:02 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.350 10:37:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.350 10:37:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.350 ************************************ 00:08:41.350 START TEST rpc_plugins 00:08:41.350 ************************************ 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:08:41.350 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.350 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:41.350 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.350 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:41.350 { 00:08:41.350 "name": "Malloc1", 00:08:41.350 "aliases": [ 00:08:41.350 "b6d473fa-be99-4e60-bce9-2793fda73000" 00:08:41.350 ], 00:08:41.350 "product_name": "Malloc disk", 00:08:41.350 "block_size": 4096, 00:08:41.350 "num_blocks": 256, 00:08:41.350 "uuid": "b6d473fa-be99-4e60-bce9-2793fda73000", 00:08:41.350 "assigned_rate_limits": { 00:08:41.350 "rw_ios_per_sec": 0, 00:08:41.350 "rw_mbytes_per_sec": 0, 00:08:41.350 "r_mbytes_per_sec": 0, 00:08:41.350 "w_mbytes_per_sec": 0 00:08:41.350 }, 00:08:41.350 "claimed": false, 00:08:41.350 "zoned": false, 00:08:41.350 "supported_io_types": { 00:08:41.350 "read": true, 00:08:41.350 "write": true, 00:08:41.350 "unmap": true, 00:08:41.350 "flush": true, 00:08:41.350 "reset": true, 00:08:41.350 "nvme_admin": false, 00:08:41.350 "nvme_io": false, 00:08:41.350 "nvme_io_md": false, 00:08:41.350 "write_zeroes": true, 00:08:41.350 "zcopy": true, 00:08:41.350 "get_zone_info": false, 00:08:41.350 "zone_management": false, 00:08:41.350 "zone_append": false, 00:08:41.350 "compare": false, 00:08:41.350 "compare_and_write": false, 00:08:41.350 "abort": true, 00:08:41.350 "seek_hole": false, 00:08:41.350 "seek_data": false, 00:08:41.350 "copy": 
true, 00:08:41.350 "nvme_iov_md": false 00:08:41.350 }, 00:08:41.350 "memory_domains": [ 00:08:41.350 { 00:08:41.350 "dma_device_id": "system", 00:08:41.350 "dma_device_type": 1 00:08:41.350 }, 00:08:41.350 { 00:08:41.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.350 "dma_device_type": 2 00:08:41.350 } 00:08:41.350 ], 00:08:41.350 "driver_specific": {} 00:08:41.350 } 00:08:41.350 ]' 00:08:41.350 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:41.350 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:41.350 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.350 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.609 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.609 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:41.609 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.609 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.609 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.609 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:41.609 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:41.609 ************************************ 00:08:41.609 END TEST rpc_plugins 00:08:41.609 ************************************ 00:08:41.609 10:37:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:41.609 00:08:41.609 real 0m0.168s 00:08:41.609 user 0m0.106s 00:08:41.609 sys 0m0.021s 00:08:41.609 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.609 10:37:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:41.609 10:37:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:41.609 10:37:02 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.609 10:37:02 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.609 10:37:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.609 ************************************ 00:08:41.609 START TEST rpc_trace_cmd_test 00:08:41.609 ************************************ 00:08:41.609 10:37:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:08:41.609 10:37:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:41.609 10:37:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:41.609 10:37:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.609 10:37:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.609 10:37:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.609 10:37:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:41.609 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57052", 00:08:41.609 "tpoint_group_mask": "0x8", 00:08:41.609 "iscsi_conn": { 00:08:41.609 "mask": "0x2", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "scsi": { 00:08:41.609 "mask": "0x4", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "bdev": { 00:08:41.609 "mask": "0x8", 00:08:41.609 "tpoint_mask": "0xffffffffffffffff" 00:08:41.609 }, 00:08:41.609 "nvmf_rdma": { 00:08:41.609 "mask": "0x10", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "nvmf_tcp": { 00:08:41.609 "mask": "0x20", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "ftl": { 00:08:41.609 "mask": "0x40", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "blobfs": { 00:08:41.609 "mask": "0x80", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "dsa": { 00:08:41.609 "mask": "0x200", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "thread": { 00:08:41.609 "mask": "0x400", 00:08:41.609 
"tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "nvme_pcie": { 00:08:41.609 "mask": "0x800", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "iaa": { 00:08:41.609 "mask": "0x1000", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.609 "nvme_tcp": { 00:08:41.609 "mask": "0x2000", 00:08:41.609 "tpoint_mask": "0x0" 00:08:41.609 }, 00:08:41.610 "bdev_nvme": { 00:08:41.610 "mask": "0x4000", 00:08:41.610 "tpoint_mask": "0x0" 00:08:41.610 }, 00:08:41.610 "sock": { 00:08:41.610 "mask": "0x8000", 00:08:41.610 "tpoint_mask": "0x0" 00:08:41.610 }, 00:08:41.610 "blob": { 00:08:41.610 "mask": "0x10000", 00:08:41.610 "tpoint_mask": "0x0" 00:08:41.610 }, 00:08:41.610 "bdev_raid": { 00:08:41.610 "mask": "0x20000", 00:08:41.610 "tpoint_mask": "0x0" 00:08:41.610 }, 00:08:41.610 "scheduler": { 00:08:41.610 "mask": "0x40000", 00:08:41.610 "tpoint_mask": "0x0" 00:08:41.610 } 00:08:41.610 }' 00:08:41.610 10:37:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:41.610 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:41.610 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:41.610 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:41.610 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:41.868 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:41.868 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:41.868 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:41.868 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:41.868 ************************************ 00:08:41.868 END TEST rpc_trace_cmd_test 00:08:41.868 ************************************ 00:08:41.868 10:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:41.868 00:08:41.868 real 0m0.275s 00:08:41.868 user 
0m0.235s 00:08:41.868 sys 0m0.031s 00:08:41.868 10:37:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.868 10:37:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.868 10:37:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:41.868 10:37:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:41.868 10:37:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:41.868 10:37:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:41.868 10:37:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.868 10:37:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.868 ************************************ 00:08:41.868 START TEST rpc_daemon_integrity 00:08:41.868 ************************************ 00:08:41.868 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:08:41.868 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:41.868 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.868 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:41.868 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.868 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:41.868 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:42.126 { 00:08:42.126 "name": "Malloc2", 00:08:42.126 "aliases": [ 00:08:42.126 "9c328f50-1a7a-44e3-860c-c8571d6d1d5c" 00:08:42.126 ], 00:08:42.126 "product_name": "Malloc disk", 00:08:42.126 "block_size": 512, 00:08:42.126 "num_blocks": 16384, 00:08:42.126 "uuid": "9c328f50-1a7a-44e3-860c-c8571d6d1d5c", 00:08:42.126 "assigned_rate_limits": { 00:08:42.126 "rw_ios_per_sec": 0, 00:08:42.126 "rw_mbytes_per_sec": 0, 00:08:42.126 "r_mbytes_per_sec": 0, 00:08:42.126 "w_mbytes_per_sec": 0 00:08:42.126 }, 00:08:42.126 "claimed": false, 00:08:42.126 "zoned": false, 00:08:42.126 "supported_io_types": { 00:08:42.126 "read": true, 00:08:42.126 "write": true, 00:08:42.126 "unmap": true, 00:08:42.126 "flush": true, 00:08:42.126 "reset": true, 00:08:42.126 "nvme_admin": false, 00:08:42.126 "nvme_io": false, 00:08:42.126 "nvme_io_md": false, 00:08:42.126 "write_zeroes": true, 00:08:42.126 "zcopy": true, 00:08:42.126 "get_zone_info": false, 00:08:42.126 "zone_management": false, 00:08:42.126 "zone_append": false, 00:08:42.126 "compare": false, 00:08:42.126 "compare_and_write": false, 00:08:42.126 "abort": true, 00:08:42.126 "seek_hole": false, 00:08:42.126 "seek_data": false, 00:08:42.126 "copy": true, 00:08:42.126 "nvme_iov_md": false 00:08:42.126 }, 00:08:42.126 "memory_domains": [ 00:08:42.126 { 00:08:42.126 "dma_device_id": "system", 00:08:42.126 "dma_device_type": 1 00:08:42.126 }, 00:08:42.126 { 00:08:42.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.126 "dma_device_type": 2 00:08:42.126 } 
00:08:42.126 ], 00:08:42.126 "driver_specific": {} 00:08:42.126 } 00:08:42.126 ]' 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.126 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.126 [2024-10-30 10:37:03.448380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:42.127 [2024-10-30 10:37:03.448675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.127 [2024-10-30 10:37:03.448717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:42.127 [2024-10-30 10:37:03.448737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.127 [2024-10-30 10:37:03.451844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.127 [2024-10-30 10:37:03.452012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:42.127 Passthru0 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:42.127 { 00:08:42.127 "name": "Malloc2", 00:08:42.127 "aliases": [ 00:08:42.127 "9c328f50-1a7a-44e3-860c-c8571d6d1d5c" 
00:08:42.127 ], 00:08:42.127 "product_name": "Malloc disk", 00:08:42.127 "block_size": 512, 00:08:42.127 "num_blocks": 16384, 00:08:42.127 "uuid": "9c328f50-1a7a-44e3-860c-c8571d6d1d5c", 00:08:42.127 "assigned_rate_limits": { 00:08:42.127 "rw_ios_per_sec": 0, 00:08:42.127 "rw_mbytes_per_sec": 0, 00:08:42.127 "r_mbytes_per_sec": 0, 00:08:42.127 "w_mbytes_per_sec": 0 00:08:42.127 }, 00:08:42.127 "claimed": true, 00:08:42.127 "claim_type": "exclusive_write", 00:08:42.127 "zoned": false, 00:08:42.127 "supported_io_types": { 00:08:42.127 "read": true, 00:08:42.127 "write": true, 00:08:42.127 "unmap": true, 00:08:42.127 "flush": true, 00:08:42.127 "reset": true, 00:08:42.127 "nvme_admin": false, 00:08:42.127 "nvme_io": false, 00:08:42.127 "nvme_io_md": false, 00:08:42.127 "write_zeroes": true, 00:08:42.127 "zcopy": true, 00:08:42.127 "get_zone_info": false, 00:08:42.127 "zone_management": false, 00:08:42.127 "zone_append": false, 00:08:42.127 "compare": false, 00:08:42.127 "compare_and_write": false, 00:08:42.127 "abort": true, 00:08:42.127 "seek_hole": false, 00:08:42.127 "seek_data": false, 00:08:42.127 "copy": true, 00:08:42.127 "nvme_iov_md": false 00:08:42.127 }, 00:08:42.127 "memory_domains": [ 00:08:42.127 { 00:08:42.127 "dma_device_id": "system", 00:08:42.127 "dma_device_type": 1 00:08:42.127 }, 00:08:42.127 { 00:08:42.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.127 "dma_device_type": 2 00:08:42.127 } 00:08:42.127 ], 00:08:42.127 "driver_specific": {} 00:08:42.127 }, 00:08:42.127 { 00:08:42.127 "name": "Passthru0", 00:08:42.127 "aliases": [ 00:08:42.127 "8e3beb23-1e20-5841-8387-ef5b0a82df62" 00:08:42.127 ], 00:08:42.127 "product_name": "passthru", 00:08:42.127 "block_size": 512, 00:08:42.127 "num_blocks": 16384, 00:08:42.127 "uuid": "8e3beb23-1e20-5841-8387-ef5b0a82df62", 00:08:42.127 "assigned_rate_limits": { 00:08:42.127 "rw_ios_per_sec": 0, 00:08:42.127 "rw_mbytes_per_sec": 0, 00:08:42.127 "r_mbytes_per_sec": 0, 00:08:42.127 "w_mbytes_per_sec": 0 
00:08:42.127 }, 00:08:42.127 "claimed": false, 00:08:42.127 "zoned": false, 00:08:42.127 "supported_io_types": { 00:08:42.127 "read": true, 00:08:42.127 "write": true, 00:08:42.127 "unmap": true, 00:08:42.127 "flush": true, 00:08:42.127 "reset": true, 00:08:42.127 "nvme_admin": false, 00:08:42.127 "nvme_io": false, 00:08:42.127 "nvme_io_md": false, 00:08:42.127 "write_zeroes": true, 00:08:42.127 "zcopy": true, 00:08:42.127 "get_zone_info": false, 00:08:42.127 "zone_management": false, 00:08:42.127 "zone_append": false, 00:08:42.127 "compare": false, 00:08:42.127 "compare_and_write": false, 00:08:42.127 "abort": true, 00:08:42.127 "seek_hole": false, 00:08:42.127 "seek_data": false, 00:08:42.127 "copy": true, 00:08:42.127 "nvme_iov_md": false 00:08:42.127 }, 00:08:42.127 "memory_domains": [ 00:08:42.127 { 00:08:42.127 "dma_device_id": "system", 00:08:42.127 "dma_device_type": 1 00:08:42.127 }, 00:08:42.127 { 00:08:42.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.127 "dma_device_type": 2 00:08:42.127 } 00:08:42.127 ], 00:08:42.127 "driver_specific": { 00:08:42.127 "passthru": { 00:08:42.127 "name": "Passthru0", 00:08:42.127 "base_bdev_name": "Malloc2" 00:08:42.127 } 00:08:42.127 } 00:08:42.127 } 00:08:42.127 ]' 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.127 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:42.385 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:42.385 ************************************ 00:08:42.385 END TEST rpc_daemon_integrity 00:08:42.385 ************************************ 00:08:42.385 10:37:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:42.385 00:08:42.385 real 0m0.373s 00:08:42.385 user 0m0.224s 00:08:42.385 sys 0m0.051s 00:08:42.385 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.385 10:37:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:42.385 10:37:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:42.385 10:37:03 rpc -- rpc/rpc.sh@84 -- # killprocess 57052 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@952 -- # '[' -z 57052 ']' 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@956 -- # kill -0 57052 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@957 -- # uname 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57052 00:08:42.385 killing process with pid 57052 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57052' 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@971 -- # kill 57052 00:08:42.385 10:37:03 rpc -- common/autotest_common.sh@976 -- # wait 57052 00:08:44.937 00:08:44.937 real 0m5.354s 00:08:44.937 user 0m6.058s 00:08:44.937 sys 0m0.939s 00:08:44.937 10:37:06 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:44.937 ************************************ 00:08:44.937 END TEST rpc 00:08:44.937 ************************************ 00:08:44.937 10:37:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.937 10:37:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:44.937 10:37:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:44.937 10:37:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.937 10:37:06 -- common/autotest_common.sh@10 -- # set +x 00:08:44.937 ************************************ 00:08:44.937 START TEST skip_rpc 00:08:44.937 ************************************ 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:44.937 * Looking for test storage... 
00:08:44.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.937 10:37:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:44.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.937 --rc genhtml_branch_coverage=1 00:08:44.937 --rc genhtml_function_coverage=1 00:08:44.937 --rc genhtml_legend=1 00:08:44.937 --rc geninfo_all_blocks=1 00:08:44.937 --rc geninfo_unexecuted_blocks=1 00:08:44.937 00:08:44.937 ' 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:44.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.937 --rc genhtml_branch_coverage=1 00:08:44.937 --rc genhtml_function_coverage=1 00:08:44.937 --rc genhtml_legend=1 00:08:44.937 --rc geninfo_all_blocks=1 00:08:44.937 --rc geninfo_unexecuted_blocks=1 00:08:44.937 00:08:44.937 ' 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:08:44.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.937 --rc genhtml_branch_coverage=1 00:08:44.937 --rc genhtml_function_coverage=1 00:08:44.937 --rc genhtml_legend=1 00:08:44.937 --rc geninfo_all_blocks=1 00:08:44.937 --rc geninfo_unexecuted_blocks=1 00:08:44.937 00:08:44.937 ' 00:08:44.937 10:37:06 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:44.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.937 --rc genhtml_branch_coverage=1 00:08:44.937 --rc genhtml_function_coverage=1 00:08:44.937 --rc genhtml_legend=1 00:08:44.937 --rc geninfo_all_blocks=1 00:08:44.937 --rc geninfo_unexecuted_blocks=1 00:08:44.937 00:08:44.937 ' 00:08:44.937 10:37:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:44.938 10:37:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:44.938 10:37:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:44.938 10:37:06 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:44.938 10:37:06 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:44.938 10:37:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.197 ************************************ 00:08:45.197 START TEST skip_rpc 00:08:45.197 ************************************ 00:08:45.197 10:37:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:08:45.197 10:37:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57281 00:08:45.197 10:37:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:45.197 10:37:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.197 10:37:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:45.197 [2024-10-30 10:37:06.547700] Starting SPDK v25.01-pre 
git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:08:45.197 [2024-10-30 10:37:06.549039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57281 ] 00:08:45.456 [2024-10-30 10:37:06.732906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.456 [2024-10-30 10:37:06.860084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57281 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57281 ']' 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57281 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57281 00:08:50.727 killing process with pid 57281 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57281' 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57281 00:08:50.727 10:37:11 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57281 00:08:52.634 00:08:52.634 real 0m7.264s 00:08:52.634 user 0m6.693s 00:08:52.634 sys 0m0.459s 00:08:52.634 10:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:52.634 10:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 ************************************ 00:08:52.634 END TEST skip_rpc 00:08:52.634 ************************************ 00:08:52.634 10:37:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:52.634 10:37:13 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:52.634 10:37:13 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:52.634 10:37:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 
************************************ 00:08:52.634 START TEST skip_rpc_with_json 00:08:52.634 ************************************ 00:08:52.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57385 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57385 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57385 ']' 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.634 10:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 [2024-10-30 10:37:13.856829] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:08:52.634 [2024-10-30 10:37:13.857057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57385 ] 00:08:52.634 [2024-10-30 10:37:14.042833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.892 [2024-10-30 10:37:14.173994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.829 [2024-10-30 10:37:15.050522] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:53.829 request: 00:08:53.829 { 00:08:53.829 "trtype": "tcp", 00:08:53.829 "method": "nvmf_get_transports", 00:08:53.829 "req_id": 1 00:08:53.829 } 00:08:53.829 Got JSON-RPC error response 00:08:53.829 response: 00:08:53.829 { 00:08:53.829 "code": -19, 00:08:53.829 "message": "No such device" 00:08:53.829 } 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.829 [2024-10-30 10:37:15.062685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.829 10:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:53.829 { 00:08:53.829 "subsystems": [ 00:08:53.829 { 00:08:53.829 "subsystem": "fsdev", 00:08:53.829 "config": [ 00:08:53.829 { 00:08:53.829 "method": "fsdev_set_opts", 00:08:53.829 "params": { 00:08:53.829 "fsdev_io_pool_size": 65535, 00:08:53.829 "fsdev_io_cache_size": 256 00:08:53.829 } 00:08:53.829 } 00:08:53.829 ] 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "subsystem": "keyring", 00:08:53.829 "config": [] 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "subsystem": "iobuf", 00:08:53.829 "config": [ 00:08:53.829 { 00:08:53.829 "method": "iobuf_set_options", 00:08:53.829 "params": { 00:08:53.829 "small_pool_count": 8192, 00:08:53.829 "large_pool_count": 1024, 00:08:53.829 "small_bufsize": 8192, 00:08:53.829 "large_bufsize": 135168, 00:08:53.829 "enable_numa": false 00:08:53.829 } 00:08:53.829 } 00:08:53.829 ] 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "subsystem": "sock", 00:08:53.829 "config": [ 00:08:53.829 { 00:08:53.829 "method": "sock_set_default_impl", 00:08:53.829 "params": { 00:08:53.829 "impl_name": "posix" 00:08:53.829 } 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "method": "sock_impl_set_options", 00:08:53.829 "params": { 00:08:53.829 "impl_name": "ssl", 00:08:53.829 "recv_buf_size": 4096, 00:08:53.829 "send_buf_size": 4096, 00:08:53.829 "enable_recv_pipe": true, 00:08:53.829 "enable_quickack": false, 00:08:53.829 
"enable_placement_id": 0, 00:08:53.829 "enable_zerocopy_send_server": true, 00:08:53.829 "enable_zerocopy_send_client": false, 00:08:53.829 "zerocopy_threshold": 0, 00:08:53.829 "tls_version": 0, 00:08:53.829 "enable_ktls": false 00:08:53.829 } 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "method": "sock_impl_set_options", 00:08:53.829 "params": { 00:08:53.829 "impl_name": "posix", 00:08:53.829 "recv_buf_size": 2097152, 00:08:53.829 "send_buf_size": 2097152, 00:08:53.829 "enable_recv_pipe": true, 00:08:53.829 "enable_quickack": false, 00:08:53.829 "enable_placement_id": 0, 00:08:53.829 "enable_zerocopy_send_server": true, 00:08:53.829 "enable_zerocopy_send_client": false, 00:08:53.829 "zerocopy_threshold": 0, 00:08:53.829 "tls_version": 0, 00:08:53.829 "enable_ktls": false 00:08:53.829 } 00:08:53.830 } 00:08:53.830 ] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "vmd", 00:08:53.830 "config": [] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "accel", 00:08:53.830 "config": [ 00:08:53.830 { 00:08:53.830 "method": "accel_set_options", 00:08:53.830 "params": { 00:08:53.830 "small_cache_size": 128, 00:08:53.830 "large_cache_size": 16, 00:08:53.830 "task_count": 2048, 00:08:53.830 "sequence_count": 2048, 00:08:53.830 "buf_count": 2048 00:08:53.830 } 00:08:53.830 } 00:08:53.830 ] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "bdev", 00:08:53.830 "config": [ 00:08:53.830 { 00:08:53.830 "method": "bdev_set_options", 00:08:53.830 "params": { 00:08:53.830 "bdev_io_pool_size": 65535, 00:08:53.830 "bdev_io_cache_size": 256, 00:08:53.830 "bdev_auto_examine": true, 00:08:53.830 "iobuf_small_cache_size": 128, 00:08:53.830 "iobuf_large_cache_size": 16 00:08:53.830 } 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "method": "bdev_raid_set_options", 00:08:53.830 "params": { 00:08:53.830 "process_window_size_kb": 1024, 00:08:53.830 "process_max_bandwidth_mb_sec": 0 00:08:53.830 } 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "method": "bdev_iscsi_set_options", 
00:08:53.830 "params": { 00:08:53.830 "timeout_sec": 30 00:08:53.830 } 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "method": "bdev_nvme_set_options", 00:08:53.830 "params": { 00:08:53.830 "action_on_timeout": "none", 00:08:53.830 "timeout_us": 0, 00:08:53.830 "timeout_admin_us": 0, 00:08:53.830 "keep_alive_timeout_ms": 10000, 00:08:53.830 "arbitration_burst": 0, 00:08:53.830 "low_priority_weight": 0, 00:08:53.830 "medium_priority_weight": 0, 00:08:53.830 "high_priority_weight": 0, 00:08:53.830 "nvme_adminq_poll_period_us": 10000, 00:08:53.830 "nvme_ioq_poll_period_us": 0, 00:08:53.830 "io_queue_requests": 0, 00:08:53.830 "delay_cmd_submit": true, 00:08:53.830 "transport_retry_count": 4, 00:08:53.830 "bdev_retry_count": 3, 00:08:53.830 "transport_ack_timeout": 0, 00:08:53.830 "ctrlr_loss_timeout_sec": 0, 00:08:53.830 "reconnect_delay_sec": 0, 00:08:53.830 "fast_io_fail_timeout_sec": 0, 00:08:53.830 "disable_auto_failback": false, 00:08:53.830 "generate_uuids": false, 00:08:53.830 "transport_tos": 0, 00:08:53.830 "nvme_error_stat": false, 00:08:53.830 "rdma_srq_size": 0, 00:08:53.830 "io_path_stat": false, 00:08:53.830 "allow_accel_sequence": false, 00:08:53.830 "rdma_max_cq_size": 0, 00:08:53.830 "rdma_cm_event_timeout_ms": 0, 00:08:53.830 "dhchap_digests": [ 00:08:53.830 "sha256", 00:08:53.830 "sha384", 00:08:53.830 "sha512" 00:08:53.830 ], 00:08:53.830 "dhchap_dhgroups": [ 00:08:53.830 "null", 00:08:53.830 "ffdhe2048", 00:08:53.830 "ffdhe3072", 00:08:53.830 "ffdhe4096", 00:08:53.830 "ffdhe6144", 00:08:53.830 "ffdhe8192" 00:08:53.830 ] 00:08:53.830 } 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "method": "bdev_nvme_set_hotplug", 00:08:53.830 "params": { 00:08:53.830 "period_us": 100000, 00:08:53.830 "enable": false 00:08:53.830 } 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "method": "bdev_wait_for_examine" 00:08:53.830 } 00:08:53.830 ] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "scsi", 00:08:53.830 "config": null 00:08:53.830 }, 00:08:53.830 { 
00:08:53.830 "subsystem": "scheduler", 00:08:53.830 "config": [ 00:08:53.830 { 00:08:53.830 "method": "framework_set_scheduler", 00:08:53.830 "params": { 00:08:53.830 "name": "static" 00:08:53.830 } 00:08:53.830 } 00:08:53.830 ] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "vhost_scsi", 00:08:53.830 "config": [] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "vhost_blk", 00:08:53.830 "config": [] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "ublk", 00:08:53.830 "config": [] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "nbd", 00:08:53.830 "config": [] 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "subsystem": "nvmf", 00:08:53.830 "config": [ 00:08:53.830 { 00:08:53.830 "method": "nvmf_set_config", 00:08:53.830 "params": { 00:08:53.830 "discovery_filter": "match_any", 00:08:53.830 "admin_cmd_passthru": { 00:08:53.830 "identify_ctrlr": false 00:08:53.830 }, 00:08:53.830 "dhchap_digests": [ 00:08:53.830 "sha256", 00:08:53.830 "sha384", 00:08:53.830 "sha512" 00:08:53.830 ], 00:08:53.830 "dhchap_dhgroups": [ 00:08:53.830 "null", 00:08:53.830 "ffdhe2048", 00:08:53.830 "ffdhe3072", 00:08:53.830 "ffdhe4096", 00:08:53.830 "ffdhe6144", 00:08:53.830 "ffdhe8192" 00:08:53.830 ] 00:08:53.830 } 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "method": "nvmf_set_max_subsystems", 00:08:53.830 "params": { 00:08:53.830 "max_subsystems": 1024 00:08:53.830 } 00:08:53.830 }, 00:08:53.830 { 00:08:53.830 "method": "nvmf_set_crdt", 00:08:53.830 "params": { 00:08:53.830 "crdt1": 0, 00:08:53.830 "crdt2": 0, 00:08:53.830 "crdt3": 0 00:08:53.830 } 00:08:53.831 }, 00:08:53.831 { 00:08:53.831 "method": "nvmf_create_transport", 00:08:53.831 "params": { 00:08:53.831 "trtype": "TCP", 00:08:53.831 "max_queue_depth": 128, 00:08:53.831 "max_io_qpairs_per_ctrlr": 127, 00:08:53.831 "in_capsule_data_size": 4096, 00:08:53.831 "max_io_size": 131072, 00:08:53.831 "io_unit_size": 131072, 00:08:53.831 "max_aq_depth": 128, 00:08:53.831 "num_shared_buffers": 511, 
00:08:53.831 "buf_cache_size": 4294967295, 00:08:53.831 "dif_insert_or_strip": false, 00:08:53.831 "zcopy": false, 00:08:53.831 "c2h_success": true, 00:08:53.831 "sock_priority": 0, 00:08:53.831 "abort_timeout_sec": 1, 00:08:53.831 "ack_timeout": 0, 00:08:53.831 "data_wr_pool_size": 0 00:08:53.831 } 00:08:53.831 } 00:08:53.831 ] 00:08:53.831 }, 00:08:53.831 { 00:08:53.831 "subsystem": "iscsi", 00:08:53.831 "config": [ 00:08:53.831 { 00:08:53.831 "method": "iscsi_set_options", 00:08:53.831 "params": { 00:08:53.831 "node_base": "iqn.2016-06.io.spdk", 00:08:53.831 "max_sessions": 128, 00:08:53.831 "max_connections_per_session": 2, 00:08:53.831 "max_queue_depth": 64, 00:08:53.831 "default_time2wait": 2, 00:08:53.831 "default_time2retain": 20, 00:08:53.831 "first_burst_length": 8192, 00:08:53.831 "immediate_data": true, 00:08:53.831 "allow_duplicated_isid": false, 00:08:53.831 "error_recovery_level": 0, 00:08:53.831 "nop_timeout": 60, 00:08:53.831 "nop_in_interval": 30, 00:08:53.831 "disable_chap": false, 00:08:53.831 "require_chap": false, 00:08:53.831 "mutual_chap": false, 00:08:53.831 "chap_group": 0, 00:08:53.831 "max_large_datain_per_connection": 64, 00:08:53.831 "max_r2t_per_connection": 4, 00:08:53.831 "pdu_pool_size": 36864, 00:08:53.831 "immediate_data_pool_size": 16384, 00:08:53.831 "data_out_pool_size": 2048 00:08:53.831 } 00:08:53.831 } 00:08:53.831 ] 00:08:53.831 } 00:08:53.831 ] 00:08:53.831 } 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57385 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57385 ']' 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57385 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57385 00:08:53.831 killing process with pid 57385 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57385' 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57385 00:08:53.831 10:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57385 00:08:56.397 10:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57441 00:08:56.397 10:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:56.398 10:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57441 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57441 ']' 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57441 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57441 00:09:01.672 killing process with pid 57441 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57441' 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57441 00:09:01.672 10:37:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57441 00:09:03.606 10:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:03.606 10:37:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:03.606 ************************************ 00:09:03.606 END TEST skip_rpc_with_json 00:09:03.606 ************************************ 00:09:03.606 00:09:03.606 real 0m11.155s 00:09:03.606 user 0m10.556s 00:09:03.606 sys 0m1.049s 00:09:03.606 10:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.606 10:37:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:03.606 10:37:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:03.607 10:37:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.607 10:37:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.607 10:37:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.607 ************************************ 00:09:03.607 START TEST skip_rpc_with_delay 00:09:03.607 ************************************ 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:03.607 
10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:03.607 10:37:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:03.607 [2024-10-30 10:37:25.075271] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:03.866 ************************************ 00:09:03.866 END TEST skip_rpc_with_delay 00:09:03.866 ************************************ 00:09:03.866 10:37:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:03.866 10:37:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:03.866 10:37:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:03.866 10:37:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:03.866 00:09:03.866 real 0m0.196s 00:09:03.866 user 0m0.104s 00:09:03.867 sys 0m0.090s 00:09:03.867 10:37:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.867 10:37:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:03.867 10:37:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:03.867 10:37:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:03.867 10:37:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:03.867 10:37:25 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:03.867 10:37:25 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.867 10:37:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.867 ************************************ 00:09:03.867 START TEST exit_on_failed_rpc_init 00:09:03.867 ************************************ 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57569 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57569 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:03.867 10:37:25 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57569 ']' 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.867 10:37:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:03.867 [2024-10-30 10:37:25.319348] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:09:03.867 [2024-10-30 10:37:25.319529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57569 ] 00:09:04.126 [2024-10-30 10:37:25.503588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.384 [2024-10-30 10:37:25.658732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:05.320 10:37:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:05.320 10:37:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:05.320 [2024-10-30 10:37:26.716507] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:09:05.320 [2024-10-30 10:37:26.716682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57598 ] 00:09:05.578 [2024-10-30 10:37:26.903011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.836 [2024-10-30 10:37:27.058281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.836 [2024-10-30 10:37:27.058410] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:05.836 [2024-10-30 10:37:27.058438] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:05.836 [2024-10-30 10:37:27.058470] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57569 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57569 ']' 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57569 00:09:06.094 10:37:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57569 00:09:06.094 killing process with pid 57569 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57569' 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57569 00:09:06.094 10:37:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57569 00:09:08.624 00:09:08.624 real 0m4.373s 00:09:08.624 user 0m4.748s 00:09:08.624 sys 0m0.687s 00:09:08.624 ************************************ 00:09:08.624 END TEST exit_on_failed_rpc_init 00:09:08.624 ************************************ 00:09:08.624 10:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.624 10:37:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:08.624 10:37:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:08.624 ************************************ 00:09:08.624 END TEST skip_rpc 00:09:08.624 ************************************ 00:09:08.624 00:09:08.624 real 0m23.395s 00:09:08.624 user 0m22.299s 00:09:08.624 sys 0m2.480s 00:09:08.624 10:37:29 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.624 10:37:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.624 10:37:29 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:08.624 10:37:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:08.624 10:37:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.624 10:37:29 -- common/autotest_common.sh@10 -- # set +x 00:09:08.624 ************************************ 00:09:08.624 START TEST rpc_client 00:09:08.624 ************************************ 00:09:08.624 10:37:29 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:08.624 * Looking for test storage... 00:09:08.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:08.624 10:37:29 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:08.624 10:37:29 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:09:08.624 10:37:29 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:08.624 10:37:29 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.624 10:37:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@345 
-- # : 1 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.625 10:37:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:08.625 10:37:29 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.625 10:37:29 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:08.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.625 --rc genhtml_branch_coverage=1 00:09:08.625 --rc genhtml_function_coverage=1 00:09:08.625 --rc genhtml_legend=1 00:09:08.625 --rc geninfo_all_blocks=1 00:09:08.625 --rc geninfo_unexecuted_blocks=1 00:09:08.625 00:09:08.625 ' 00:09:08.625 10:37:29 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:08.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.625 --rc genhtml_branch_coverage=1 00:09:08.625 --rc genhtml_function_coverage=1 00:09:08.625 --rc 
genhtml_legend=1 00:09:08.625 --rc geninfo_all_blocks=1 00:09:08.625 --rc geninfo_unexecuted_blocks=1 00:09:08.625 00:09:08.625 ' 00:09:08.625 10:37:29 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:08.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.625 --rc genhtml_branch_coverage=1 00:09:08.625 --rc genhtml_function_coverage=1 00:09:08.625 --rc genhtml_legend=1 00:09:08.625 --rc geninfo_all_blocks=1 00:09:08.625 --rc geninfo_unexecuted_blocks=1 00:09:08.625 00:09:08.625 ' 00:09:08.625 10:37:29 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:08.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.625 --rc genhtml_branch_coverage=1 00:09:08.625 --rc genhtml_function_coverage=1 00:09:08.625 --rc genhtml_legend=1 00:09:08.625 --rc geninfo_all_blocks=1 00:09:08.625 --rc geninfo_unexecuted_blocks=1 00:09:08.625 00:09:08.625 ' 00:09:08.625 10:37:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:08.625 OK 00:09:08.625 10:37:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:08.625 00:09:08.625 real 0m0.223s 00:09:08.625 user 0m0.124s 00:09:08.625 sys 0m0.110s 00:09:08.625 10:37:29 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.625 10:37:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:08.625 ************************************ 00:09:08.625 END TEST rpc_client 00:09:08.625 ************************************ 00:09:08.625 10:37:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:08.625 10:37:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:08.625 10:37:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.625 10:37:29 -- common/autotest_common.sh@10 -- # set +x 00:09:08.625 ************************************ 00:09:08.625 START TEST json_config 
00:09:08.625 ************************************ 00:09:08.625 10:37:29 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:08.625 10:37:29 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:08.625 10:37:29 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:09:08.625 10:37:29 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:08.625 10:37:30 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:08.625 10:37:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.625 10:37:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.625 10:37:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.625 10:37:30 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.625 10:37:30 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.625 10:37:30 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.625 10:37:30 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.625 10:37:30 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.625 10:37:30 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.625 10:37:30 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.625 10:37:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.625 10:37:30 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:08.625 10:37:30 json_config -- scripts/common.sh@345 -- # : 1 00:09:08.625 10:37:30 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.625 10:37:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.625 10:37:30 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:08.625 10:37:30 json_config -- scripts/common.sh@353 -- # local d=1 00:09:08.625 10:37:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.625 10:37:30 json_config -- scripts/common.sh@355 -- # echo 1 00:09:08.625 10:37:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.625 10:37:30 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:08.625 10:37:30 json_config -- scripts/common.sh@353 -- # local d=2 00:09:08.625 10:37:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.625 10:37:30 json_config -- scripts/common.sh@355 -- # echo 2 00:09:08.625 10:37:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.625 10:37:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.625 10:37:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.625 10:37:30 json_config -- scripts/common.sh@368 -- # return 0 00:09:08.625 10:37:30 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.625 10:37:30 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:08.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.625 --rc genhtml_branch_coverage=1 00:09:08.625 --rc genhtml_function_coverage=1 00:09:08.625 --rc genhtml_legend=1 00:09:08.625 --rc geninfo_all_blocks=1 00:09:08.625 --rc geninfo_unexecuted_blocks=1 00:09:08.625 00:09:08.625 ' 00:09:08.625 10:37:30 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:08.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.625 --rc genhtml_branch_coverage=1 00:09:08.625 --rc genhtml_function_coverage=1 00:09:08.625 --rc genhtml_legend=1 00:09:08.625 --rc geninfo_all_blocks=1 00:09:08.625 --rc geninfo_unexecuted_blocks=1 00:09:08.625 00:09:08.625 ' 00:09:08.625 10:37:30 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:08.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.625 --rc genhtml_branch_coverage=1 00:09:08.625 --rc genhtml_function_coverage=1 00:09:08.625 --rc genhtml_legend=1 00:09:08.625 --rc geninfo_all_blocks=1 00:09:08.625 --rc geninfo_unexecuted_blocks=1 00:09:08.625 00:09:08.625 ' 00:09:08.625 10:37:30 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:08.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.625 --rc genhtml_branch_coverage=1 00:09:08.625 --rc genhtml_function_coverage=1 00:09:08.625 --rc genhtml_legend=1 00:09:08.625 --rc geninfo_all_blocks=1 00:09:08.625 --rc geninfo_unexecuted_blocks=1 00:09:08.625 00:09:08.625 ' 00:09:08.625 10:37:30 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.625 10:37:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.885 10:37:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9fa3128a-707e-46f0-80ce-82e26fbba9c2 00:09:08.885 10:37:30 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=9fa3128a-707e-46f0-80ce-82e26fbba9c2 00:09:08.885 10:37:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.885 10:37:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.885 10:37:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:08.885 10:37:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.885 10:37:30 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.885 10:37:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.885 10:37:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.885 10:37:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.885 10:37:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.885 10:37:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.885 10:37:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.885 10:37:30 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.885 10:37:30 json_config -- paths/export.sh@5 -- # export PATH 00:09:08.886 10:37:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@51 -- # : 0 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.886 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.886 10:37:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.886 WARNING: No tests are enabled so not running JSON configuration tests 00:09:08.886 10:37:30 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:08.886 10:37:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:08.886 10:37:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:08.886 10:37:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:08.886 10:37:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:08.886 10:37:30 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:08.886 10:37:30 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:08.886 ************************************ 00:09:08.886 END TEST json_config 00:09:08.886 ************************************ 00:09:08.886 00:09:08.886 real 0m0.190s 00:09:08.886 user 0m0.128s 00:09:08.886 sys 0m0.065s 00:09:08.886 10:37:30 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.886 10:37:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:08.886 10:37:30 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:08.886 10:37:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:08.886 10:37:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.886 10:37:30 -- common/autotest_common.sh@10 -- # set +x 00:09:08.886 ************************************ 00:09:08.886 START TEST json_config_extra_key 00:09:08.886 ************************************ 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:08.886 10:37:30 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.886 10:37:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:08.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.886 --rc genhtml_branch_coverage=1 00:09:08.886 --rc genhtml_function_coverage=1 00:09:08.886 --rc genhtml_legend=1 00:09:08.886 --rc geninfo_all_blocks=1 00:09:08.886 --rc geninfo_unexecuted_blocks=1 00:09:08.886 00:09:08.886 ' 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:08.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.886 --rc genhtml_branch_coverage=1 00:09:08.886 --rc genhtml_function_coverage=1 00:09:08.886 --rc 
genhtml_legend=1 00:09:08.886 --rc geninfo_all_blocks=1 00:09:08.886 --rc geninfo_unexecuted_blocks=1 00:09:08.886 00:09:08.886 ' 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:08.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.886 --rc genhtml_branch_coverage=1 00:09:08.886 --rc genhtml_function_coverage=1 00:09:08.886 --rc genhtml_legend=1 00:09:08.886 --rc geninfo_all_blocks=1 00:09:08.886 --rc geninfo_unexecuted_blocks=1 00:09:08.886 00:09:08.886 ' 00:09:08.886 10:37:30 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:08.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.886 --rc genhtml_branch_coverage=1 00:09:08.886 --rc genhtml_function_coverage=1 00:09:08.886 --rc genhtml_legend=1 00:09:08.886 --rc geninfo_all_blocks=1 00:09:08.886 --rc geninfo_unexecuted_blocks=1 00:09:08.886 00:09:08.886 ' 00:09:08.886 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.886 10:37:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:08.886 10:37:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.886 10:37:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9fa3128a-707e-46f0-80ce-82e26fbba9c2 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9fa3128a-707e-46f0-80ce-82e26fbba9c2 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.887 10:37:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.887 10:37:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.887 10:37:30 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.887 10:37:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.887 10:37:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.887 10:37:30 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.887 10:37:30 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.887 10:37:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:08.887 10:37:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.887 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.887 10:37:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:08.887 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:09.147 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:09.147 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:09.147 INFO: launching applications... 00:09:09.147 Waiting for target to run... 
00:09:09.147 10:37:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57797 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57797 /var/tmp/spdk_tgt.sock 00:09:09.147 10:37:30 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:09.147 10:37:30 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57797 ']' 00:09:09.147 10:37:30 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:09.147 10:37:30 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:09.147 10:37:30 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:09:09.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:09.147 10:37:30 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:09.147 10:37:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:09.147 [2024-10-30 10:37:30.483450] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:09:09.147 [2024-10-30 10:37:30.483866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57797 ] 00:09:09.714 [2024-10-30 10:37:30.960652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.714 [2024-10-30 10:37:31.101509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.650 10:37:31 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:10.650 00:09:10.650 INFO: shutting down applications... 00:09:10.650 10:37:31 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:10.650 10:37:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:09:10.650 10:37:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57797 ]] 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57797 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:09:10.650 10:37:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:10.908 10:37:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:10.908 10:37:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:10.908 10:37:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:09:10.908 10:37:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:11.527 10:37:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:11.527 10:37:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:11.527 10:37:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:09:11.527 10:37:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:12.096 10:37:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:12.096 10:37:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:12.096 10:37:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:09:12.096 10:37:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:12.355 10:37:33 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:09:12.355 10:37:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:12.355 10:37:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:09:12.355 10:37:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:12.921 10:37:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:12.922 10:37:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:12.922 10:37:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:09:12.922 10:37:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:13.489 10:37:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:13.489 10:37:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:13.489 10:37:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57797 00:09:13.489 10:37:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:13.489 10:37:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:13.489 10:37:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:13.489 SPDK target shutdown done 00:09:13.489 Success 00:09:13.489 10:37:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:13.489 10:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:13.489 00:09:13.489 real 0m4.650s 00:09:13.489 user 0m4.006s 00:09:13.489 sys 0m0.644s 00:09:13.489 ************************************ 00:09:13.489 END TEST json_config_extra_key 00:09:13.489 ************************************ 00:09:13.489 10:37:34 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.489 10:37:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:13.489 10:37:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:13.489 10:37:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:13.489 10:37:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.489 10:37:34 -- common/autotest_common.sh@10 -- # set +x 00:09:13.489 ************************************ 00:09:13.489 START TEST alias_rpc 00:09:13.489 ************************************ 00:09:13.489 10:37:34 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:13.489 * Looking for test storage... 00:09:13.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:13.489 10:37:34 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:13.489 10:37:34 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:13.489 10:37:34 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:13.748 10:37:35 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:13.748 10:37:35 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:13.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.748 10:37:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:13.748 10:37:35 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.748 10:37:35 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:13.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.748 --rc genhtml_branch_coverage=1 00:09:13.749 --rc genhtml_function_coverage=1 00:09:13.749 --rc genhtml_legend=1 00:09:13.749 --rc geninfo_all_blocks=1 00:09:13.749 --rc geninfo_unexecuted_blocks=1 00:09:13.749 00:09:13.749 ' 00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.749 
--rc genhtml_branch_coverage=1 00:09:13.749 --rc genhtml_function_coverage=1 00:09:13.749 --rc genhtml_legend=1 00:09:13.749 --rc geninfo_all_blocks=1 00:09:13.749 --rc geninfo_unexecuted_blocks=1 00:09:13.749 00:09:13.749 ' 00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.749 --rc genhtml_branch_coverage=1 00:09:13.749 --rc genhtml_function_coverage=1 00:09:13.749 --rc genhtml_legend=1 00:09:13.749 --rc geninfo_all_blocks=1 00:09:13.749 --rc geninfo_unexecuted_blocks=1 00:09:13.749 00:09:13.749 ' 00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:13.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.749 --rc genhtml_branch_coverage=1 00:09:13.749 --rc genhtml_function_coverage=1 00:09:13.749 --rc genhtml_legend=1 00:09:13.749 --rc geninfo_all_blocks=1 00:09:13.749 --rc geninfo_unexecuted_blocks=1 00:09:13.749 00:09:13.749 ' 00:09:13.749 10:37:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:13.749 10:37:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57914 00:09:13.749 10:37:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:13.749 10:37:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57914 00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57914 ']' 00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:13.749 10:37:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.749 [2024-10-30 10:37:35.178906] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:09:13.749 [2024-10-30 10:37:35.179339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57914 ] 00:09:14.007 [2024-10-30 10:37:35.364137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.267 [2024-10-30 10:37:35.490060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.204 10:37:36 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:15.204 10:37:36 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:15.204 10:37:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:15.204 10:37:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57914 00:09:15.204 10:37:36 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57914 ']' 00:09:15.204 10:37:36 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57914 00:09:15.204 10:37:36 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:09:15.204 10:37:36 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.204 10:37:36 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57914 00:09:15.463 killing process with pid 57914 00:09:15.463 10:37:36 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:15.463 10:37:36 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:15.463 10:37:36 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57914' 00:09:15.463 10:37:36 alias_rpc -- 
common/autotest_common.sh@971 -- # kill 57914 00:09:15.463 10:37:36 alias_rpc -- common/autotest_common.sh@976 -- # wait 57914 00:09:17.995 ************************************ 00:09:17.995 END TEST alias_rpc 00:09:17.995 ************************************ 00:09:17.995 00:09:17.995 real 0m4.043s 00:09:17.995 user 0m4.153s 00:09:17.995 sys 0m0.601s 00:09:17.995 10:37:38 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.995 10:37:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:17.996 10:37:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:17.996 10:37:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:17.996 10:37:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:17.996 10:37:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.996 10:37:38 -- common/autotest_common.sh@10 -- # set +x 00:09:17.996 ************************************ 00:09:17.996 START TEST spdkcli_tcp 00:09:17.996 ************************************ 00:09:17.996 10:37:38 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:17.996 * Looking for test storage... 
00:09:17.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.996 10:37:39 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:17.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.996 --rc genhtml_branch_coverage=1 00:09:17.996 --rc genhtml_function_coverage=1 00:09:17.996 --rc genhtml_legend=1 00:09:17.996 --rc geninfo_all_blocks=1 00:09:17.996 --rc geninfo_unexecuted_blocks=1 00:09:17.996 00:09:17.996 ' 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:17.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.996 --rc genhtml_branch_coverage=1 00:09:17.996 --rc genhtml_function_coverage=1 00:09:17.996 --rc genhtml_legend=1 00:09:17.996 --rc geninfo_all_blocks=1 00:09:17.996 --rc geninfo_unexecuted_blocks=1 00:09:17.996 00:09:17.996 ' 00:09:17.996 10:37:39 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:17.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.996 --rc genhtml_branch_coverage=1 00:09:17.996 --rc genhtml_function_coverage=1 00:09:17.996 --rc genhtml_legend=1 00:09:17.996 --rc geninfo_all_blocks=1 00:09:17.996 --rc geninfo_unexecuted_blocks=1 00:09:17.996 00:09:17.996 ' 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:17.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.996 --rc genhtml_branch_coverage=1 00:09:17.996 --rc genhtml_function_coverage=1 00:09:17.996 --rc genhtml_legend=1 00:09:17.996 --rc geninfo_all_blocks=1 00:09:17.996 --rc geninfo_unexecuted_blocks=1 00:09:17.996 00:09:17.996 ' 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:17.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
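The `waitforlisten` step logged here ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") polls, with `max_retries=100`, until the freshly started `spdk_tgt` accepts connections on its RPC socket. A minimal illustrative poller in the same spirit — the function name and retry interval are assumptions for this sketch, not the actual autotest helper:

```python
import socket
import time

def wait_for_listen(path, retries=100, interval=0.1):
    # Poll until a UNIX-domain socket (e.g. /var/tmp/spdk.sock) accepts
    # connections, mirroring waitforlisten's bounded retry loop.
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False
```

On timeout the real test trips its `trap ... ERR` handler and kills the target with `killprocess $spdk_tgt_pid`, as seen later in the log.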
00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58021 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58021 00:09:17.996 10:37:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58021 ']' 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:17.996 10:37:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:17.996 [2024-10-30 10:37:39.333247] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:09:17.996 [2024-10-30 10:37:39.333691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58021 ] 00:09:18.255 [2024-10-30 10:37:39.524218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.255 [2024-10-30 10:37:39.679559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.255 [2024-10-30 10:37:39.679577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.256 10:37:40 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.256 10:37:40 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:09:19.256 10:37:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58038 00:09:19.256 10:37:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:19.256 10:37:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:19.516 [ 00:09:19.516 "bdev_malloc_delete", 00:09:19.516 "bdev_malloc_create", 00:09:19.516 "bdev_null_resize", 00:09:19.516 "bdev_null_delete", 00:09:19.516 "bdev_null_create", 00:09:19.516 "bdev_nvme_cuse_unregister", 00:09:19.516 "bdev_nvme_cuse_register", 00:09:19.516 "bdev_opal_new_user", 00:09:19.516 "bdev_opal_set_lock_state", 00:09:19.516 "bdev_opal_delete", 00:09:19.516 "bdev_opal_get_info", 00:09:19.516 "bdev_opal_create", 00:09:19.516 "bdev_nvme_opal_revert", 00:09:19.516 "bdev_nvme_opal_init", 00:09:19.516 "bdev_nvme_send_cmd", 00:09:19.516 "bdev_nvme_set_keys", 00:09:19.516 "bdev_nvme_get_path_iostat", 00:09:19.516 "bdev_nvme_get_mdns_discovery_info", 00:09:19.516 "bdev_nvme_stop_mdns_discovery", 00:09:19.516 "bdev_nvme_start_mdns_discovery", 00:09:19.516 "bdev_nvme_set_multipath_policy", 00:09:19.516 
"bdev_nvme_set_preferred_path", 00:09:19.516 "bdev_nvme_get_io_paths", 00:09:19.516 "bdev_nvme_remove_error_injection", 00:09:19.516 "bdev_nvme_add_error_injection", 00:09:19.516 "bdev_nvme_get_discovery_info", 00:09:19.516 "bdev_nvme_stop_discovery", 00:09:19.516 "bdev_nvme_start_discovery", 00:09:19.516 "bdev_nvme_get_controller_health_info", 00:09:19.516 "bdev_nvme_disable_controller", 00:09:19.516 "bdev_nvme_enable_controller", 00:09:19.516 "bdev_nvme_reset_controller", 00:09:19.516 "bdev_nvme_get_transport_statistics", 00:09:19.516 "bdev_nvme_apply_firmware", 00:09:19.516 "bdev_nvme_detach_controller", 00:09:19.516 "bdev_nvme_get_controllers", 00:09:19.516 "bdev_nvme_attach_controller", 00:09:19.516 "bdev_nvme_set_hotplug", 00:09:19.516 "bdev_nvme_set_options", 00:09:19.516 "bdev_passthru_delete", 00:09:19.516 "bdev_passthru_create", 00:09:19.516 "bdev_lvol_set_parent_bdev", 00:09:19.516 "bdev_lvol_set_parent", 00:09:19.516 "bdev_lvol_check_shallow_copy", 00:09:19.516 "bdev_lvol_start_shallow_copy", 00:09:19.516 "bdev_lvol_grow_lvstore", 00:09:19.516 "bdev_lvol_get_lvols", 00:09:19.516 "bdev_lvol_get_lvstores", 00:09:19.516 "bdev_lvol_delete", 00:09:19.516 "bdev_lvol_set_read_only", 00:09:19.516 "bdev_lvol_resize", 00:09:19.516 "bdev_lvol_decouple_parent", 00:09:19.516 "bdev_lvol_inflate", 00:09:19.516 "bdev_lvol_rename", 00:09:19.516 "bdev_lvol_clone_bdev", 00:09:19.516 "bdev_lvol_clone", 00:09:19.516 "bdev_lvol_snapshot", 00:09:19.516 "bdev_lvol_create", 00:09:19.516 "bdev_lvol_delete_lvstore", 00:09:19.516 "bdev_lvol_rename_lvstore", 00:09:19.516 "bdev_lvol_create_lvstore", 00:09:19.516 "bdev_raid_set_options", 00:09:19.516 "bdev_raid_remove_base_bdev", 00:09:19.516 "bdev_raid_add_base_bdev", 00:09:19.516 "bdev_raid_delete", 00:09:19.516 "bdev_raid_create", 00:09:19.516 "bdev_raid_get_bdevs", 00:09:19.516 "bdev_error_inject_error", 00:09:19.516 "bdev_error_delete", 00:09:19.516 "bdev_error_create", 00:09:19.516 "bdev_split_delete", 00:09:19.516 
"bdev_split_create", 00:09:19.516 "bdev_delay_delete", 00:09:19.516 "bdev_delay_create", 00:09:19.516 "bdev_delay_update_latency", 00:09:19.516 "bdev_zone_block_delete", 00:09:19.516 "bdev_zone_block_create", 00:09:19.516 "blobfs_create", 00:09:19.516 "blobfs_detect", 00:09:19.516 "blobfs_set_cache_size", 00:09:19.516 "bdev_aio_delete", 00:09:19.516 "bdev_aio_rescan", 00:09:19.516 "bdev_aio_create", 00:09:19.516 "bdev_ftl_set_property", 00:09:19.516 "bdev_ftl_get_properties", 00:09:19.516 "bdev_ftl_get_stats", 00:09:19.516 "bdev_ftl_unmap", 00:09:19.516 "bdev_ftl_unload", 00:09:19.516 "bdev_ftl_delete", 00:09:19.516 "bdev_ftl_load", 00:09:19.516 "bdev_ftl_create", 00:09:19.516 "bdev_virtio_attach_controller", 00:09:19.516 "bdev_virtio_scsi_get_devices", 00:09:19.516 "bdev_virtio_detach_controller", 00:09:19.516 "bdev_virtio_blk_set_hotplug", 00:09:19.516 "bdev_iscsi_delete", 00:09:19.516 "bdev_iscsi_create", 00:09:19.516 "bdev_iscsi_set_options", 00:09:19.516 "accel_error_inject_error", 00:09:19.516 "ioat_scan_accel_module", 00:09:19.516 "dsa_scan_accel_module", 00:09:19.516 "iaa_scan_accel_module", 00:09:19.516 "keyring_file_remove_key", 00:09:19.516 "keyring_file_add_key", 00:09:19.516 "keyring_linux_set_options", 00:09:19.516 "fsdev_aio_delete", 00:09:19.516 "fsdev_aio_create", 00:09:19.516 "iscsi_get_histogram", 00:09:19.516 "iscsi_enable_histogram", 00:09:19.516 "iscsi_set_options", 00:09:19.516 "iscsi_get_auth_groups", 00:09:19.516 "iscsi_auth_group_remove_secret", 00:09:19.516 "iscsi_auth_group_add_secret", 00:09:19.516 "iscsi_delete_auth_group", 00:09:19.516 "iscsi_create_auth_group", 00:09:19.516 "iscsi_set_discovery_auth", 00:09:19.516 "iscsi_get_options", 00:09:19.516 "iscsi_target_node_request_logout", 00:09:19.516 "iscsi_target_node_set_redirect", 00:09:19.516 "iscsi_target_node_set_auth", 00:09:19.516 "iscsi_target_node_add_lun", 00:09:19.516 "iscsi_get_stats", 00:09:19.516 "iscsi_get_connections", 00:09:19.516 "iscsi_portal_group_set_auth", 
00:09:19.516 "iscsi_start_portal_group", 00:09:19.516 "iscsi_delete_portal_group", 00:09:19.516 "iscsi_create_portal_group", 00:09:19.516 "iscsi_get_portal_groups", 00:09:19.516 "iscsi_delete_target_node", 00:09:19.516 "iscsi_target_node_remove_pg_ig_maps", 00:09:19.516 "iscsi_target_node_add_pg_ig_maps", 00:09:19.516 "iscsi_create_target_node", 00:09:19.516 "iscsi_get_target_nodes", 00:09:19.516 "iscsi_delete_initiator_group", 00:09:19.516 "iscsi_initiator_group_remove_initiators", 00:09:19.516 "iscsi_initiator_group_add_initiators", 00:09:19.516 "iscsi_create_initiator_group", 00:09:19.516 "iscsi_get_initiator_groups", 00:09:19.516 "nvmf_set_crdt", 00:09:19.516 "nvmf_set_config", 00:09:19.516 "nvmf_set_max_subsystems", 00:09:19.516 "nvmf_stop_mdns_prr", 00:09:19.516 "nvmf_publish_mdns_prr", 00:09:19.516 "nvmf_subsystem_get_listeners", 00:09:19.516 "nvmf_subsystem_get_qpairs", 00:09:19.516 "nvmf_subsystem_get_controllers", 00:09:19.516 "nvmf_get_stats", 00:09:19.516 "nvmf_get_transports", 00:09:19.516 "nvmf_create_transport", 00:09:19.516 "nvmf_get_targets", 00:09:19.516 "nvmf_delete_target", 00:09:19.516 "nvmf_create_target", 00:09:19.516 "nvmf_subsystem_allow_any_host", 00:09:19.516 "nvmf_subsystem_set_keys", 00:09:19.516 "nvmf_subsystem_remove_host", 00:09:19.516 "nvmf_subsystem_add_host", 00:09:19.516 "nvmf_ns_remove_host", 00:09:19.516 "nvmf_ns_add_host", 00:09:19.516 "nvmf_subsystem_remove_ns", 00:09:19.516 "nvmf_subsystem_set_ns_ana_group", 00:09:19.516 "nvmf_subsystem_add_ns", 00:09:19.516 "nvmf_subsystem_listener_set_ana_state", 00:09:19.516 "nvmf_discovery_get_referrals", 00:09:19.516 "nvmf_discovery_remove_referral", 00:09:19.516 "nvmf_discovery_add_referral", 00:09:19.516 "nvmf_subsystem_remove_listener", 00:09:19.516 "nvmf_subsystem_add_listener", 00:09:19.516 "nvmf_delete_subsystem", 00:09:19.516 "nvmf_create_subsystem", 00:09:19.516 "nvmf_get_subsystems", 00:09:19.516 "env_dpdk_get_mem_stats", 00:09:19.516 "nbd_get_disks", 00:09:19.516 
"nbd_stop_disk", 00:09:19.516 "nbd_start_disk", 00:09:19.516 "ublk_recover_disk", 00:09:19.516 "ublk_get_disks", 00:09:19.516 "ublk_stop_disk", 00:09:19.516 "ublk_start_disk", 00:09:19.516 "ublk_destroy_target", 00:09:19.516 "ublk_create_target", 00:09:19.516 "virtio_blk_create_transport", 00:09:19.516 "virtio_blk_get_transports", 00:09:19.516 "vhost_controller_set_coalescing", 00:09:19.516 "vhost_get_controllers", 00:09:19.516 "vhost_delete_controller", 00:09:19.516 "vhost_create_blk_controller", 00:09:19.517 "vhost_scsi_controller_remove_target", 00:09:19.517 "vhost_scsi_controller_add_target", 00:09:19.517 "vhost_start_scsi_controller", 00:09:19.517 "vhost_create_scsi_controller", 00:09:19.517 "thread_set_cpumask", 00:09:19.517 "scheduler_set_options", 00:09:19.517 "framework_get_governor", 00:09:19.517 "framework_get_scheduler", 00:09:19.517 "framework_set_scheduler", 00:09:19.517 "framework_get_reactors", 00:09:19.517 "thread_get_io_channels", 00:09:19.517 "thread_get_pollers", 00:09:19.517 "thread_get_stats", 00:09:19.517 "framework_monitor_context_switch", 00:09:19.517 "spdk_kill_instance", 00:09:19.517 "log_enable_timestamps", 00:09:19.517 "log_get_flags", 00:09:19.517 "log_clear_flag", 00:09:19.517 "log_set_flag", 00:09:19.517 "log_get_level", 00:09:19.517 "log_set_level", 00:09:19.517 "log_get_print_level", 00:09:19.517 "log_set_print_level", 00:09:19.517 "framework_enable_cpumask_locks", 00:09:19.517 "framework_disable_cpumask_locks", 00:09:19.517 "framework_wait_init", 00:09:19.517 "framework_start_init", 00:09:19.517 "scsi_get_devices", 00:09:19.517 "bdev_get_histogram", 00:09:19.517 "bdev_enable_histogram", 00:09:19.517 "bdev_set_qos_limit", 00:09:19.517 "bdev_set_qd_sampling_period", 00:09:19.517 "bdev_get_bdevs", 00:09:19.517 "bdev_reset_iostat", 00:09:19.517 "bdev_get_iostat", 00:09:19.517 "bdev_examine", 00:09:19.517 "bdev_wait_for_examine", 00:09:19.517 "bdev_set_options", 00:09:19.517 "accel_get_stats", 00:09:19.517 "accel_set_options", 
00:09:19.517 "accel_set_driver", 00:09:19.517 "accel_crypto_key_destroy", 00:09:19.517 "accel_crypto_keys_get", 00:09:19.517 "accel_crypto_key_create", 00:09:19.517 "accel_assign_opc", 00:09:19.517 "accel_get_module_info", 00:09:19.517 "accel_get_opc_assignments", 00:09:19.517 "vmd_rescan", 00:09:19.517 "vmd_remove_device", 00:09:19.517 "vmd_enable", 00:09:19.517 "sock_get_default_impl", 00:09:19.517 "sock_set_default_impl", 00:09:19.517 "sock_impl_set_options", 00:09:19.517 "sock_impl_get_options", 00:09:19.517 "iobuf_get_stats", 00:09:19.517 "iobuf_set_options", 00:09:19.517 "keyring_get_keys", 00:09:19.517 "framework_get_pci_devices", 00:09:19.517 "framework_get_config", 00:09:19.517 "framework_get_subsystems", 00:09:19.517 "fsdev_set_opts", 00:09:19.517 "fsdev_get_opts", 00:09:19.517 "trace_get_info", 00:09:19.517 "trace_get_tpoint_group_mask", 00:09:19.517 "trace_disable_tpoint_group", 00:09:19.517 "trace_enable_tpoint_group", 00:09:19.517 "trace_clear_tpoint_mask", 00:09:19.517 "trace_set_tpoint_mask", 00:09:19.517 "notify_get_notifications", 00:09:19.517 "notify_get_types", 00:09:19.517 "spdk_get_version", 00:09:19.517 "rpc_get_methods" 00:09:19.517 ] 00:09:19.517 10:37:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:19.517 10:37:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:19.517 10:37:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58021 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58021 ']' 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58021 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:19.517 10:37:40 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58021 00:09:19.517 killing process with pid 58021 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58021' 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58021 00:09:19.517 10:37:40 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58021 00:09:22.048 ************************************ 00:09:22.048 END TEST spdkcli_tcp 00:09:22.048 ************************************ 00:09:22.048 00:09:22.048 real 0m4.156s 00:09:22.048 user 0m7.413s 00:09:22.048 sys 0m0.648s 00:09:22.048 10:37:43 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:22.048 10:37:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.048 10:37:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:22.048 10:37:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:22.048 10:37:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:22.048 10:37:43 -- common/autotest_common.sh@10 -- # set +x 00:09:22.048 ************************************ 00:09:22.048 START TEST dpdk_mem_utility 00:09:22.048 ************************************ 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:22.048 * Looking for test storage... 
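The spdkcli_tcp run that just ended proxies SPDK's UNIX-domain RPC socket to TCP with `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`, then drives it via `rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods`, which returns the long method list above. A rough sketch of what such a client puts on the wire — assuming plain JSON-RPC 2.0 framing, which is what SPDK's RPC layer speaks; the helper name is illustrative, not an SPDK API:

```python
import json

def build_rpc_request(method, params=None, req_id=1):
    # Build the JSON-RPC 2.0 request body that a client like rpc.py
    # sends to the target (directly over /var/tmp/spdk.sock, or via
    # the socat bridge on 127.0.0.1:9998 in this test).
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)
```

For `build_rpc_request("rpc_get_methods")` the target answers with a `result` array of method names, which is exactly the listing captured in the log.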
00:09:22.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.048 10:37:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.048 --rc genhtml_branch_coverage=1 00:09:22.048 --rc genhtml_function_coverage=1 00:09:22.048 --rc genhtml_legend=1 00:09:22.048 --rc geninfo_all_blocks=1 00:09:22.048 --rc geninfo_unexecuted_blocks=1 00:09:22.048 00:09:22.048 ' 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.048 --rc genhtml_branch_coverage=1 00:09:22.048 --rc genhtml_function_coverage=1 00:09:22.048 --rc genhtml_legend=1 00:09:22.048 --rc geninfo_all_blocks=1 00:09:22.048 --rc 
geninfo_unexecuted_blocks=1 00:09:22.048 00:09:22.048 ' 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.048 --rc genhtml_branch_coverage=1 00:09:22.048 --rc genhtml_function_coverage=1 00:09:22.048 --rc genhtml_legend=1 00:09:22.048 --rc geninfo_all_blocks=1 00:09:22.048 --rc geninfo_unexecuted_blocks=1 00:09:22.048 00:09:22.048 ' 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:22.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.048 --rc genhtml_branch_coverage=1 00:09:22.048 --rc genhtml_function_coverage=1 00:09:22.048 --rc genhtml_legend=1 00:09:22.048 --rc geninfo_all_blocks=1 00:09:22.048 --rc geninfo_unexecuted_blocks=1 00:09:22.048 00:09:22.048 ' 00:09:22.048 10:37:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:22.048 10:37:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:22.048 10:37:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58143 00:09:22.048 10:37:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58143 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58143 ']' 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:22.048 10:37:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:22.048 [2024-10-30 10:37:43.500269] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:09:22.048 [2024-10-30 10:37:43.500682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58143 ] 00:09:22.306 [2024-10-30 10:37:43.685166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.564 [2024-10-30 10:37:43.844337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.500 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:23.500 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:09:23.500 10:37:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:23.500 10:37:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:23.500 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.500 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:23.500 { 00:09:23.500 "filename": "/tmp/spdk_mem_dump.txt" 00:09:23.500 } 00:09:23.500 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.500 10:37:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:23.500 DPDK memory size 816.000000 MiB in 1 heap(s) 00:09:23.500 1 heaps totaling size 816.000000 MiB 00:09:23.500 size: 816.000000 MiB heap id: 0 00:09:23.500 end heaps---------- 00:09:23.500 9 mempools totaling size 595.772034 MiB 00:09:23.500 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:23.500 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:23.500 size: 92.545471 MiB name: bdev_io_58143 00:09:23.500 size: 50.003479 MiB name: msgpool_58143 00:09:23.500 size: 36.509338 MiB name: fsdev_io_58143 00:09:23.500 size: 21.763794 MiB name: PDU_Pool 00:09:23.500 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:23.500 size: 4.133484 MiB name: evtpool_58143 00:09:23.500 size: 0.026123 MiB name: Session_Pool 00:09:23.500 end mempools------- 00:09:23.500 6 memzones totaling size 4.142822 MiB 00:09:23.500 size: 1.000366 MiB name: RG_ring_0_58143 00:09:23.500 size: 1.000366 MiB name: RG_ring_1_58143 00:09:23.500 size: 1.000366 MiB name: RG_ring_4_58143 00:09:23.500 size: 1.000366 MiB name: RG_ring_5_58143 00:09:23.500 size: 0.125366 MiB name: RG_ring_2_58143 00:09:23.500 size: 0.015991 MiB name: RG_ring_3_58143 00:09:23.500 end memzones------- 00:09:23.500 10:37:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:23.500 heap id: 0 total size: 816.000000 MiB number of busy elements: 317 number of free elements: 18 00:09:23.500 list of free elements. 
size: 16.790894 MiB 00:09:23.500 element at address: 0x200006400000 with size: 1.995972 MiB 00:09:23.500 element at address: 0x20000a600000 with size: 1.995972 MiB 00:09:23.500 element at address: 0x200003e00000 with size: 1.991028 MiB 00:09:23.500 element at address: 0x200018d00040 with size: 0.999939 MiB 00:09:23.500 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:23.500 element at address: 0x200019200000 with size: 0.999084 MiB 00:09:23.500 element at address: 0x200031e00000 with size: 0.994324 MiB 00:09:23.500 element at address: 0x200000400000 with size: 0.992004 MiB 00:09:23.500 element at address: 0x200018a00000 with size: 0.959656 MiB 00:09:23.500 element at address: 0x200019500040 with size: 0.936401 MiB 00:09:23.500 element at address: 0x200000200000 with size: 0.716980 MiB 00:09:23.500 element at address: 0x20001ac00000 with size: 0.561462 MiB 00:09:23.500 element at address: 0x200000c00000 with size: 0.490173 MiB 00:09:23.500 element at address: 0x200018e00000 with size: 0.487976 MiB 00:09:23.500 element at address: 0x200019600000 with size: 0.485413 MiB 00:09:23.500 element at address: 0x200012c00000 with size: 0.443237 MiB 00:09:23.500 element at address: 0x200028000000 with size: 0.390442 MiB 00:09:23.500 element at address: 0x200000800000 with size: 0.350891 MiB 00:09:23.500 list of standard malloc elements. 
size: 199.288208 MiB 00:09:23.500 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:09:23.500 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:09:23.500 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:09:23.500 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:23.500 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:23.500 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:23.500 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:09:23.500 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:23.500 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:09:23.500 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:09:23.500 element at address: 0x200012bff040 with size: 0.000305 MiB 00:09:23.500 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:09:23.500 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:09:23.500 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:09:23.500 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200000cff000 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:09:23.500 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:09:23.500 element at address: 0x200012bff180 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bff280 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bff380 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bff480 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bff580 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bff680 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bff780 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bff880 with size: 0.000244 MiB 00:09:23.501 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71780 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71880 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71980 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c72080 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012c72180 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:09:23.501 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:23.501 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:09:23.501 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac90fc0 with size: 0.000244 
MiB 00:09:23.501 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac92bc0 
with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:09:23.501 element at 
address: 0x20001ac947c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200028063f40 with size: 0.000244 MiB 00:09:23.501 element at address: 0x200028064040 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806af80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b080 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b180 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b280 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b380 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b480 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b580 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b680 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b780 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b880 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806b980 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806ba80 with size: 0.000244 MiB 
00:09:23.501 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806be80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c080 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c180 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c280 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c380 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c480 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c580 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c680 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c780 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c880 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806c980 with size: 0.000244 MiB 00:09:23.501 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d080 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d180 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d280 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d380 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d480 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d580 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d680 with 
size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d780 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d880 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806d980 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806da80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806db80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806de80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806df80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e080 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e180 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e280 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e380 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e480 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e580 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e680 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e780 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e880 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806e980 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f080 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f180 with size: 0.000244 MiB 00:09:23.502 element at address: 
0x20002806f280 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f380 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f480 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f580 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f680 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f780 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f880 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806f980 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:09:23.502 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:09:23.502 list of memzone associated elements. size: 599.920898 MiB 00:09:23.502 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:09:23.502 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:23.502 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:09:23.502 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:23.502 element at address: 0x200012df4740 with size: 92.045105 MiB 00:09:23.502 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58143_0 00:09:23.502 element at address: 0x200000dff340 with size: 48.003113 MiB 00:09:23.502 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58143_0 00:09:23.502 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:09:23.502 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58143_0 00:09:23.502 element at address: 0x2000197be900 with size: 20.255615 MiB 00:09:23.502 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:23.502 element at address: 0x200031ffeb00 with size: 18.005127 MiB 
00:09:23.502 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:23.502 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:09:23.502 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58143_0 00:09:23.502 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:09:23.502 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58143 00:09:23.502 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:23.502 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58143 00:09:23.502 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:23.502 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:23.502 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:09:23.502 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:23.502 element at address: 0x200018afde00 with size: 1.008179 MiB 00:09:23.502 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:23.502 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:09:23.502 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:23.502 element at address: 0x200000cff100 with size: 1.000549 MiB 00:09:23.502 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58143 00:09:23.502 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:09:23.502 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58143 00:09:23.502 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:09:23.502 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58143 00:09:23.502 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:09:23.502 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58143 00:09:23.502 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:09:23.502 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58143 00:09:23.502 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 
00:09:23.502 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58143
00:09:23.502 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:09:23.502 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:09:23.502 element at address: 0x200012c72280 with size: 0.500549 MiB
00:09:23.502 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:23.502 element at address: 0x20001967c440 with size: 0.250549 MiB
00:09:23.502 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:23.502 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:09:23.502 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58143
00:09:23.502 element at address: 0x20000085df80 with size: 0.125549 MiB
00:09:23.502 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58143
00:09:23.502 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:09:23.502 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:23.502 element at address: 0x200028064140 with size: 0.023804 MiB
00:09:23.502 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:09:23.502 element at address: 0x200000859d40 with size: 0.016174 MiB
00:09:23.502 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58143
00:09:23.502 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:09:23.502 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:09:23.502 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:09:23.502 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58143
00:09:23.502 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:09:23.502 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58143
00:09:23.502 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:09:23.502 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58143
00:09:23.502 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:09:23.502 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:09:23.502 10:37:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:23.502 10:37:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58143
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58143 ']'
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58143
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58143
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58143'
00:09:23.502 killing process with pid 58143
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58143
00:09:23.502 10:37:44 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58143
00:09:26.139
00:09:26.139 real 0m3.951s
00:09:26.139 user 0m4.025s
00:09:26.139 sys 0m0.633s
00:09:26.139 10:37:47 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:26.139 ************************************
00:09:26.139 END TEST dpdk_mem_utility
00:09:26.139 ************************************
00:09:26.139 10:37:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:26.139 10:37:47 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:26.139 10:37:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:26.139 10:37:47 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:26.139 10:37:47 -- common/autotest_common.sh@10 -- # set +x
00:09:26.139 ************************************
00:09:26.139 START TEST event
00:09:26.139 ************************************
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:26.139 * Looking for test storage...
00:09:26.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1691 -- # lcov --version
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:09:26.139 10:37:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:26.139 10:37:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:26.139 10:37:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:26.139 10:37:47 event -- scripts/common.sh@336 -- # IFS=.-:
00:09:26.139 10:37:47 event -- scripts/common.sh@336 -- # read -ra ver1
00:09:26.139 10:37:47 event -- scripts/common.sh@337 -- # IFS=.-:
00:09:26.139 10:37:47 event -- scripts/common.sh@337 -- # read -ra ver2
00:09:26.139 10:37:47 event -- scripts/common.sh@338 -- # local 'op=<'
00:09:26.139 10:37:47 event -- scripts/common.sh@340 -- # ver1_l=2
00:09:26.139 10:37:47 event -- scripts/common.sh@341 -- # ver2_l=1
00:09:26.139 10:37:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:26.139 10:37:47 event -- scripts/common.sh@344 -- # case "$op" in
00:09:26.139 10:37:47 event -- scripts/common.sh@345 -- # : 1
00:09:26.139 10:37:47 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:26.139 10:37:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:26.139 10:37:47 event -- scripts/common.sh@365 -- # decimal 1
00:09:26.139 10:37:47 event -- scripts/common.sh@353 -- # local d=1
00:09:26.139 10:37:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:26.139 10:37:47 event -- scripts/common.sh@355 -- # echo 1
00:09:26.139 10:37:47 event -- scripts/common.sh@365 -- # ver1[v]=1
00:09:26.139 10:37:47 event -- scripts/common.sh@366 -- # decimal 2
00:09:26.139 10:37:47 event -- scripts/common.sh@353 -- # local d=2
00:09:26.139 10:37:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:26.139 10:37:47 event -- scripts/common.sh@355 -- # echo 2
00:09:26.139 10:37:47 event -- scripts/common.sh@366 -- # ver2[v]=2
00:09:26.139 10:37:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:26.139 10:37:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:26.139 10:37:47 event -- scripts/common.sh@368 -- # return 0
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:09:26.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.139 --rc genhtml_branch_coverage=1
00:09:26.139 --rc genhtml_function_coverage=1
00:09:26.139 --rc genhtml_legend=1
00:09:26.139 --rc geninfo_all_blocks=1
00:09:26.139 --rc geninfo_unexecuted_blocks=1
00:09:26.139
00:09:26.139 '
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:09:26.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.139 --rc genhtml_branch_coverage=1
00:09:26.139 --rc genhtml_function_coverage=1
00:09:26.139 --rc genhtml_legend=1
00:09:26.139 --rc geninfo_all_blocks=1
00:09:26.139 --rc geninfo_unexecuted_blocks=1
00:09:26.139
00:09:26.139 '
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:09:26.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.139 --rc genhtml_branch_coverage=1
00:09:26.139 --rc genhtml_function_coverage=1
00:09:26.139 --rc genhtml_legend=1
00:09:26.139 --rc geninfo_all_blocks=1
00:09:26.139 --rc geninfo_unexecuted_blocks=1
00:09:26.139
00:09:26.139 '
00:09:26.139 10:37:47 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:09:26.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.139 --rc genhtml_branch_coverage=1
00:09:26.139 --rc genhtml_function_coverage=1
00:09:26.139 --rc genhtml_legend=1
00:09:26.139 --rc geninfo_all_blocks=1
00:09:26.139 --rc geninfo_unexecuted_blocks=1
00:09:26.139
00:09:26.139 '
00:09:26.139 10:37:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:26.140 10:37:47 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:26.140 10:37:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:26.140 10:37:47 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:09:26.140 10:37:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:26.140 10:37:47 event -- common/autotest_common.sh@10 -- # set +x
00:09:26.140 ************************************
00:09:26.140 START TEST event_perf
00:09:26.140 ************************************
00:09:26.140 10:37:47 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:26.140 Running I/O for 1 seconds...[2024-10-30 10:37:47.409259] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization...
00:09:26.140 [2024-10-30 10:37:47.409579] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58251 ] 00:09:26.140 [2024-10-30 10:37:47.595742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.398 [2024-10-30 10:37:47.731071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.398 [2024-10-30 10:37:47.731206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.398 [2024-10-30 10:37:47.732586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.398 [2024-10-30 10:37:47.732611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.775 Running I/O for 1 seconds... 00:09:27.775 lcore 0: 187152 00:09:27.775 lcore 1: 187152 00:09:27.775 lcore 2: 187154 00:09:27.775 lcore 3: 187153 00:09:27.775 done. 
00:09:27.775 ************************************ 00:09:27.775 END TEST event_perf 00:09:27.775 ************************************ 00:09:27.775 00:09:27.775 real 0m1.593s 00:09:27.775 user 0m4.353s 00:09:27.775 sys 0m0.117s 00:09:27.775 10:37:48 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.775 10:37:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:27.775 10:37:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:27.775 10:37:48 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:27.775 10:37:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.775 10:37:48 event -- common/autotest_common.sh@10 -- # set +x 00:09:27.775 ************************************ 00:09:27.775 START TEST event_reactor 00:09:27.775 ************************************ 00:09:27.775 10:37:48 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:27.775 [2024-10-30 10:37:49.045566] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:09:27.775 [2024-10-30 10:37:49.045743] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58286 ] 00:09:27.775 [2024-10-30 10:37:49.228561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.033 [2024-10-30 10:37:49.359226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.407 test_start 00:09:29.407 oneshot 00:09:29.407 tick 100 00:09:29.407 tick 100 00:09:29.407 tick 250 00:09:29.407 tick 100 00:09:29.407 tick 100 00:09:29.407 tick 100 00:09:29.407 tick 250 00:09:29.407 tick 500 00:09:29.407 tick 100 00:09:29.407 tick 100 00:09:29.407 tick 250 00:09:29.407 tick 100 00:09:29.407 tick 100 00:09:29.407 test_end 00:09:29.407 00:09:29.407 real 0m1.596s 00:09:29.407 user 0m1.378s 00:09:29.407 sys 0m0.107s 00:09:29.407 10:37:50 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:29.407 10:37:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:29.407 ************************************ 00:09:29.407 END TEST event_reactor 00:09:29.407 ************************************ 00:09:29.407 10:37:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:29.407 10:37:50 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:29.407 10:37:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:29.407 10:37:50 event -- common/autotest_common.sh@10 -- # set +x 00:09:29.407 ************************************ 00:09:29.407 START TEST event_reactor_perf 00:09:29.407 ************************************ 00:09:29.407 10:37:50 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:29.407 [2024-10-30 
10:37:50.697039] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:09:29.407 [2024-10-30 10:37:50.697267] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58327 ] 00:09:29.665 [2024-10-30 10:37:50.891226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.665 [2024-10-30 10:37:51.019974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.041 test_start 00:09:31.041 test_end 00:09:31.041 Performance: 284389 events per second 00:09:31.041 ************************************ 00:09:31.041 END TEST event_reactor_perf 00:09:31.041 ************************************ 00:09:31.041 00:09:31.041 real 0m1.602s 00:09:31.041 user 0m1.375s 00:09:31.041 sys 0m0.117s 00:09:31.041 10:37:52 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:31.041 10:37:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:31.041 10:37:52 event -- event/event.sh@49 -- # uname -s 00:09:31.041 10:37:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:31.041 10:37:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:31.041 10:37:52 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:31.041 10:37:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:31.041 10:37:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:31.041 ************************************ 00:09:31.041 START TEST event_scheduler 00:09:31.041 ************************************ 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:31.041 * Looking for test storage... 
00:09:31.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.041 10:37:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:31.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.041 --rc genhtml_branch_coverage=1 00:09:31.041 --rc genhtml_function_coverage=1 00:09:31.041 --rc genhtml_legend=1 00:09:31.041 --rc geninfo_all_blocks=1 00:09:31.041 --rc geninfo_unexecuted_blocks=1 00:09:31.041 00:09:31.041 ' 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:31.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.041 --rc genhtml_branch_coverage=1 00:09:31.041 --rc genhtml_function_coverage=1 00:09:31.041 --rc 
genhtml_legend=1 00:09:31.041 --rc geninfo_all_blocks=1 00:09:31.041 --rc geninfo_unexecuted_blocks=1 00:09:31.041 00:09:31.041 ' 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:31.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.041 --rc genhtml_branch_coverage=1 00:09:31.041 --rc genhtml_function_coverage=1 00:09:31.041 --rc genhtml_legend=1 00:09:31.041 --rc geninfo_all_blocks=1 00:09:31.041 --rc geninfo_unexecuted_blocks=1 00:09:31.041 00:09:31.041 ' 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:31.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.041 --rc genhtml_branch_coverage=1 00:09:31.041 --rc genhtml_function_coverage=1 00:09:31.041 --rc genhtml_legend=1 00:09:31.041 --rc geninfo_all_blocks=1 00:09:31.041 --rc geninfo_unexecuted_blocks=1 00:09:31.041 00:09:31.041 ' 00:09:31.041 10:37:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:31.041 10:37:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58403 00:09:31.041 10:37:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:31.041 10:37:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:31.041 10:37:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58403 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58403 ']' 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.041 10:37:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:31.300 [2024-10-30 10:37:52.590211] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:09:31.300 [2024-10-30 10:37:52.590401] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58403 ] 00:09:31.558 [2024-10-30 10:37:52.780363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.558 [2024-10-30 10:37:52.947246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.558 [2024-10-30 10:37:52.947365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.558 [2024-10-30 10:37:52.947474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.558 [2024-10-30 10:37:52.947478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.493 10:37:53 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:32.493 10:37:53 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:09:32.493 10:37:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:32.493 10:37:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.493 10:37:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:32.493 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:32.493 POWER: Cannot set governor of lcore 0 to userspace 00:09:32.493 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:32.493 POWER: Cannot set governor of lcore 0 to performance 00:09:32.493 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:32.493 POWER: Cannot set governor of lcore 0 to userspace 00:09:32.493 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:32.493 POWER: Cannot set governor of lcore 0 to userspace 00:09:32.493 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:32.493 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:32.493 POWER: Unable to set Power Management Environment for lcore 0 00:09:32.493 [2024-10-30 10:37:53.658076] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:09:32.493 [2024-10-30 10:37:53.658106] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:09:32.493 [2024-10-30 10:37:53.658120] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:32.493 [2024-10-30 10:37:53.658146] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:32.493 [2024-10-30 10:37:53.658159] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:32.493 [2024-10-30 10:37:53.658173] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:32.493 10:37:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.493 10:37:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:32.493 10:37:53 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.493 10:37:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 [2024-10-30 10:37:53.980244] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:32.753 10:37:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:32.753 10:37:53 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:32.753 10:37:53 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:32.753 10:37:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 ************************************ 00:09:32.753 START TEST scheduler_create_thread 00:09:32.753 ************************************ 00:09:32.753 10:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:09:32.753 10:37:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:32.753 10:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 2 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 3 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 4 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 5 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 6 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.753 7 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 8 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 9 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 10 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:32.753 10:37:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.753 10:37:54 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:34.132 ************************************ 00:09:34.132 END TEST scheduler_create_thread 00:09:34.132 ************************************ 00:09:34.132 10:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.132 00:09:34.132 real 0m1.177s 00:09:34.132 user 0m0.020s 00:09:34.132 sys 0m0.001s 00:09:34.132 10:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.132 10:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:34.132 10:37:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:34.132 10:37:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58403 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58403 ']' 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58403 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58403 00:09:34.132 killing process with pid 58403 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58403' 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58403 00:09:34.132 10:37:55 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58403 00:09:34.390 [2024-10-30 10:37:55.648236] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:35.327 ************************************ 00:09:35.327 END TEST event_scheduler 00:09:35.327 ************************************ 00:09:35.327 00:09:35.327 real 0m4.411s 00:09:35.327 user 0m7.933s 00:09:35.327 sys 0m0.526s 00:09:35.327 10:37:56 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.327 10:37:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:35.327 10:37:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:35.327 10:37:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:35.327 10:37:56 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:35.327 10:37:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.327 10:37:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:35.327 ************************************ 00:09:35.327 START TEST app_repeat 00:09:35.327 ************************************ 00:09:35.327 10:37:56 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:35.327 Process app_repeat pid: 58498 00:09:35.327 spdk_app_start Round 0 00:09:35.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58498 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:35.327 10:37:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58498' 00:09:35.328 10:37:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:35.328 10:37:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:35.328 10:37:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58498 /var/tmp/spdk-nbd.sock 00:09:35.328 10:37:56 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58498 ']' 00:09:35.328 10:37:56 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:35.328 10:37:56 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:35.328 10:37:56 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:35.328 10:37:56 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:35.328 10:37:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:35.587 [2024-10-30 10:37:56.848372] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:09:35.587 [2024-10-30 10:37:56.849075] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58498 ] 00:09:35.587 [2024-10-30 10:37:57.035578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.845 [2024-10-30 10:37:57.169448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.845 [2024-10-30 10:37:57.169461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.782 10:37:57 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:36.782 10:37:57 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:36.782 10:37:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:37.041 Malloc0 00:09:37.041 10:37:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:37.300 Malloc1 00:09:37.300 10:37:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:37.300 10:37:58 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:37.300 10:37:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:37.559 /dev/nbd0 00:09:37.559 10:37:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:37.559 10:37:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:37.559 10:37:58 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:37.559 1+0 records in 00:09:37.559 1+0 
records out 00:09:37.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048698 s, 8.4 MB/s 00:09:37.559 10:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:37.559 10:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:37.559 10:37:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:37.559 10:37:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:37.559 10:37:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:37.559 10:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:37.559 10:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:37.559 10:37:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:37.818 /dev/nbd1 00:09:38.077 10:37:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:38.077 10:37:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:38.077 10:37:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:38.078 1+0 records in 00:09:38.078 1+0 records out 00:09:38.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428896 s, 9.6 MB/s 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:38.078 10:37:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:38.078 10:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:38.078 10:37:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:38.078 10:37:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:38.078 10:37:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.078 10:37:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:38.337 10:37:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:38.337 { 00:09:38.337 "nbd_device": "/dev/nbd0", 00:09:38.337 "bdev_name": "Malloc0" 00:09:38.337 }, 00:09:38.337 { 00:09:38.337 "nbd_device": "/dev/nbd1", 00:09:38.337 "bdev_name": "Malloc1" 00:09:38.337 } 00:09:38.337 ]' 00:09:38.337 10:37:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:38.337 { 00:09:38.337 "nbd_device": "/dev/nbd0", 00:09:38.337 "bdev_name": "Malloc0" 00:09:38.337 }, 00:09:38.337 { 00:09:38.337 "nbd_device": "/dev/nbd1", 00:09:38.337 "bdev_name": "Malloc1" 00:09:38.337 } 00:09:38.337 ]' 00:09:38.337 10:37:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
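The xtrace above shows `waitfornbd` from `common/autotest_common.sh` polling `/proc/partitions` until the nbd device appears, then sanity-reading one 4 KiB block. A minimal sketch of that polling loop, reconstructed from the trace — the optional second parameter (so a file can stand in for `/proc/partitions`) and the sleep interval are assumptions, not part of the original helper:

```shell
#!/usr/bin/env bash
# Sketch of waitfornbd as seen in the xtrace: retry up to 20 times,
# checking whether the named nbd device is listed in the partitions table.
waitfornbd() {
	local nbd_name=$1
	local partitions=${2:-/proc/partitions}  # assumption: injectable for testing
	local i
	for ((i = 1; i <= 20; i++)); do
		# -w matches the whole device name, so nbd1 does not match nbd10
		grep -q -w "$nbd_name" "$partitions" && return 0
		sleep 0.1  # assumed back-off; the real helper's delay is not visible in the log
	done
	return 1
}
```

In the real script the loop is followed by a direct-I/O `dd` read and a `stat -c %s` size check (`'[' 4096 '!=' 0 ']'`) to confirm the device actually serves data before returning.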
00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:38.338 /dev/nbd1' 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:38.338 /dev/nbd1' 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:38.338 256+0 records in 00:09:38.338 256+0 records out 00:09:38.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00675667 s, 155 MB/s 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:38.338 256+0 records in 00:09:38.338 256+0 records out 00:09:38.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311128 s, 33.7 MB/s 00:09:38.338 10:37:59 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:38.338 256+0 records in 00:09:38.338 256+0 records out 00:09:38.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030485 s, 34.4 MB/s 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:38.338 10:37:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:38.339 10:37:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:38.339 10:37:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:38.339 10:37:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.339 10:37:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
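The write/verify pass logged above (`nbd_dd_data_verify ... write` then `... verify`) fills a 1 MiB temp file from `/dev/urandom`, copies it to each nbd device, and `cmp`s it back. A hedged sketch of that flow, with regular files standing in for `/dev/nbd*` (the real script writes to block devices with `oflag=direct`, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of nbd_dd_data_verify from bdev/nbd_common.sh, per the xtrace.
# $1 = temp pattern file, $2 = "write" or "verify", rest = device paths.
nbd_dd_data_verify() {
	local tmp_file=$1 operation=$2
	shift 2
	local dev
	if [ "$operation" = write ]; then
		# 256 x 4096 B = 1 MiB of random data as the reference pattern
		dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
		for dev in "$@"; do
			dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
		done
	elif [ "$operation" = verify ]; then
		for dev in "$@"; do
			# the log uses `cmp -b -n 1M`; spelled out in bytes here
			cmp -n 1048576 "$tmp_file" "$dev" || return 1
		done
		rm "$tmp_file"
	fi
}
```

The asymmetry matters: `write` leaves the pattern file on disk so the later `verify` call can compare against it, and only `verify` removes it — matching the `rm .../nbdrandtest` at the end of the trace.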
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:38.339 10:37:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:38.339 10:37:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:38.339 10:37:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.339 10:37:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.657 10:38:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.931 10:38:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:39.190 10:38:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:39.190 10:38:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:39.757 10:38:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:40.692 [2024-10-30 10:38:02.145706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.950 [2024-10-30 10:38:02.271134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.950 [2024-10-30 10:38:02.271146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.209 
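After stopping both disks, the trace shows `nbd_get_count` returning 0: `nbd_get_disks` now yields `'[]'`, `jq` extracts no device names, and `grep -c /dev/nbd` counts zero (the trailing `true` in the log absorbs grep's nonzero exit on no matches). A simplified sketch of that counting step — the inline JSON and the `grep -o` extraction (in place of the `rpc.py | jq` pipeline) are stand-ins for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count logic from bdev/nbd_common.sh.
# In the real script the JSON comes from:
#   rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
# Extract the device paths (stand-in for the jq filter)
nbd_disks_name=$(printf '%s\n' "$nbd_disks_json" | grep -o '/dev/nbd[0-9]*')
# Count them; `|| true` mirrors the log, where grep exits 1 on an empty list
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

The test harness then asserts the count against expectations (`'[' 2 -ne 2 ']'` after start, `'[' 0 -ne 0 ']'` after stop) before killing the app instance with `spdk_kill_instance SIGTERM`.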
[2024-10-30 10:38:02.460409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:41.209 [2024-10-30 10:38:02.460511] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:43.133 spdk_app_start Round 1 00:09:43.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:43.133 10:38:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:43.133 10:38:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:43.133 10:38:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58498 /var/tmp/spdk-nbd.sock 00:09:43.133 10:38:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58498 ']' 00:09:43.133 10:38:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:43.133 10:38:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:43.133 10:38:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:43.133 10:38:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:43.133 10:38:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:43.133 10:38:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.133 10:38:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:43.133 10:38:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:43.391 Malloc0 00:09:43.391 10:38:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:43.649 Malloc1 00:09:43.649 10:38:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:43.649 10:38:05 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:43.649 10:38:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:43.907 /dev/nbd0 00:09:44.165 10:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:44.165 10:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:44.165 1+0 records in 00:09:44.165 1+0 records out 00:09:44.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360753 s, 11.4 MB/s 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.165 
10:38:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:44.165 10:38:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:44.165 10:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.165 10:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:44.165 10:38:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:44.424 /dev/nbd1 00:09:44.424 10:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:44.424 10:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:44.424 1+0 records in 00:09:44.424 1+0 records out 00:09:44.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355816 s, 11.5 MB/s 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:44.424 10:38:05 event.app_repeat 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:44.424 10:38:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:44.424 10:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:44.424 10:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:44.424 10:38:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:44.424 10:38:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.424 10:38:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:44.682 { 00:09:44.682 "nbd_device": "/dev/nbd0", 00:09:44.682 "bdev_name": "Malloc0" 00:09:44.682 }, 00:09:44.682 { 00:09:44.682 "nbd_device": "/dev/nbd1", 00:09:44.682 "bdev_name": "Malloc1" 00:09:44.682 } 00:09:44.682 ]' 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:44.682 { 00:09:44.682 "nbd_device": "/dev/nbd0", 00:09:44.682 "bdev_name": "Malloc0" 00:09:44.682 }, 00:09:44.682 { 00:09:44.682 "nbd_device": "/dev/nbd1", 00:09:44.682 "bdev_name": "Malloc1" 00:09:44.682 } 00:09:44.682 ]' 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:44.682 /dev/nbd1' 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:44.682 /dev/nbd1' 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:44.682 
10:38:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:44.682 256+0 records in 00:09:44.682 256+0 records out 00:09:44.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0072165 s, 145 MB/s 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:44.682 256+0 records in 00:09:44.682 256+0 records out 00:09:44.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289329 s, 36.2 MB/s 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:44.682 10:38:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:44.941 256+0 records in 00:09:44.941 256+0 records out 00:09:44.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277801 s, 37.7 MB/s 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.941 10:38:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:45.199 10:38:06 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.199 10:38:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.458 10:38:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:45.717 10:38:06 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:45.717 10:38:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:45.717 10:38:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:46.283 10:38:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:47.219 [2024-10-30 10:38:08.561681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:47.219 [2024-10-30 10:38:08.682469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.219 [2024-10-30 10:38:08.682474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.478 [2024-10-30 10:38:08.869513] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:47.478 [2024-10-30 10:38:08.869648] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:49.379 spdk_app_start Round 2 00:09:49.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:49.379 10:38:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:49.379 10:38:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:49.379 10:38:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58498 /var/tmp/spdk-nbd.sock 00:09:49.379 10:38:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58498 ']' 00:09:49.379 10:38:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:49.379 10:38:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:49.379 10:38:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:49.379 10:38:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:49.379 10:38:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:49.379 10:38:10 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.379 10:38:10 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:49.379 10:38:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:49.946 Malloc0 00:09:49.946 10:38:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:50.205 Malloc1 00:09:50.205 10:38:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.205 10:38:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:50.463 /dev/nbd0 00:09:50.463 10:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:50.463 10:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:50.463 1+0 records in 00:09:50.463 1+0 records out 00:09:50.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364092 s, 11.2 MB/s 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:50.463 10:38:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:50.463 10:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.463 10:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.463 10:38:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:50.794 /dev/nbd1 00:09:50.794 10:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:50.794 10:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:50.794 10:38:12 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:50.794 1+0 records in 00:09:50.794 1+0 records out 00:09:50.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412175 s, 9.9 MB/s 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:50.794 10:38:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:09:50.794 10:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.794 10:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.794 10:38:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.794 10:38:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.794 10:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:51.069 { 00:09:51.069 "nbd_device": "/dev/nbd0", 00:09:51.069 "bdev_name": "Malloc0" 00:09:51.069 }, 00:09:51.069 { 00:09:51.069 "nbd_device": "/dev/nbd1", 00:09:51.069 "bdev_name": "Malloc1" 00:09:51.069 } 00:09:51.069 ]' 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:51.069 { 
00:09:51.069 "nbd_device": "/dev/nbd0", 00:09:51.069 "bdev_name": "Malloc0" 00:09:51.069 }, 00:09:51.069 { 00:09:51.069 "nbd_device": "/dev/nbd1", 00:09:51.069 "bdev_name": "Malloc1" 00:09:51.069 } 00:09:51.069 ]' 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:51.069 /dev/nbd1' 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:51.069 /dev/nbd1' 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:51.069 256+0 records in 00:09:51.069 256+0 records out 00:09:51.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677739 s, 155 MB/s 00:09:51.069 10:38:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.069 10:38:12 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:51.328 256+0 records in 00:09:51.328 256+0 records out 00:09:51.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259611 s, 40.4 MB/s 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:51.328 256+0 records in 00:09:51.328 256+0 records out 00:09:51.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0373309 s, 28.1 MB/s 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.328 10:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.587 10:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:51.845 10:38:13 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.845 10:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:52.103 10:38:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:52.103 10:38:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:52.670 10:38:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:53.607 
[2024-10-30 10:38:15.074821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.865 [2024-10-30 10:38:15.196384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.865 [2024-10-30 10:38:15.196400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.123 [2024-10-30 10:38:15.384385] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:54.123 [2024-10-30 10:38:15.384481] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:56.039 10:38:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58498 /var/tmp/spdk-nbd.sock 00:09:56.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58498 ']' 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:09:56.039 10:38:17 event.app_repeat -- event/event.sh@39 -- # killprocess 58498 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58498 ']' 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58498 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58498 00:09:56.039 killing process with pid 58498 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58498' 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58498 00:09:56.039 10:38:17 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58498 00:09:56.974 spdk_app_start is called in Round 0. 00:09:56.974 Shutdown signal received, stop current app iteration 00:09:56.974 Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 reinitialization... 00:09:56.974 spdk_app_start is called in Round 1. 00:09:56.974 Shutdown signal received, stop current app iteration 00:09:56.974 Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 reinitialization... 00:09:56.974 spdk_app_start is called in Round 2. 
00:09:56.974 Shutdown signal received, stop current app iteration 00:09:56.974 Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 reinitialization... 00:09:56.974 spdk_app_start is called in Round 3. 00:09:56.974 Shutdown signal received, stop current app iteration 00:09:56.974 10:38:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:56.974 10:38:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:56.974 00:09:56.974 real 0m21.501s 00:09:56.974 user 0m47.637s 00:09:56.974 sys 0m2.989s 00:09:56.974 10:38:18 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.974 ************************************ 00:09:56.974 10:38:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:56.974 END TEST app_repeat 00:09:56.974 ************************************ 00:09:56.974 10:38:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:56.974 10:38:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:56.974 10:38:18 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.974 10:38:18 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.974 10:38:18 event -- common/autotest_common.sh@10 -- # set +x 00:09:56.974 ************************************ 00:09:56.974 START TEST cpu_locks 00:09:56.974 ************************************ 00:09:56.974 10:38:18 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:56.974 * Looking for test storage... 
00:09:56.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:56.974 10:38:18 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:56.974 10:38:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:56.974 10:38:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:09:57.233 10:38:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.233 10:38:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:57.233 10:38:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.233 10:38:18 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:57.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.233 --rc genhtml_branch_coverage=1 00:09:57.233 --rc genhtml_function_coverage=1 00:09:57.233 --rc genhtml_legend=1 00:09:57.233 --rc geninfo_all_blocks=1 00:09:57.233 --rc geninfo_unexecuted_blocks=1 00:09:57.233 00:09:57.233 ' 00:09:57.233 10:38:18 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:57.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.233 --rc genhtml_branch_coverage=1 00:09:57.233 --rc genhtml_function_coverage=1 00:09:57.233 --rc genhtml_legend=1 00:09:57.233 --rc geninfo_all_blocks=1 00:09:57.233 --rc geninfo_unexecuted_blocks=1 
00:09:57.234 00:09:57.234 ' 00:09:57.234 10:38:18 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:57.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.234 --rc genhtml_branch_coverage=1 00:09:57.234 --rc genhtml_function_coverage=1 00:09:57.234 --rc genhtml_legend=1 00:09:57.234 --rc geninfo_all_blocks=1 00:09:57.234 --rc geninfo_unexecuted_blocks=1 00:09:57.234 00:09:57.234 ' 00:09:57.234 10:38:18 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:57.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.234 --rc genhtml_branch_coverage=1 00:09:57.234 --rc genhtml_function_coverage=1 00:09:57.234 --rc genhtml_legend=1 00:09:57.234 --rc geninfo_all_blocks=1 00:09:57.234 --rc geninfo_unexecuted_blocks=1 00:09:57.234 00:09:57.234 ' 00:09:57.234 10:38:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:57.234 10:38:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:57.234 10:38:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:57.234 10:38:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:57.234 10:38:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:57.234 10:38:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.234 10:38:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.234 ************************************ 00:09:57.234 START TEST default_locks 00:09:57.234 ************************************ 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58974 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58974 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58974 ']' 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.234 10:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.234 [2024-10-30 10:38:18.682351] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:09:57.234 [2024-10-30 10:38:18.682675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58974 ] 00:09:57.492 [2024-10-30 10:38:18.858255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.751 [2024-10-30 10:38:18.984255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.684 10:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.684 10:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:09:58.684 10:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58974 00:09:58.684 10:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58974 00:09:58.684 10:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58974 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58974 ']' 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58974 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58974 00:09:58.942 killing process with pid 58974 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58974' 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58974 00:09:58.942 10:38:20 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58974 00:10:01.474 10:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58974 00:10:01.474 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:01.474 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58974 00:10:01.474 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:01.474 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.474 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58974 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58974 ']' 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 ERROR: process (pid: 58974) is no longer running 00:10:01.475 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58974) - No such process 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:01.475 00:10:01.475 real 0m3.890s 00:10:01.475 user 0m3.931s 00:10:01.475 sys 0m0.719s 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.475 10:38:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 ************************************ 00:10:01.475 END TEST default_locks 00:10:01.475 ************************************ 00:10:01.475 10:38:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:01.475 10:38:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:10:01.475 10:38:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.475 10:38:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 ************************************ 00:10:01.475 START TEST default_locks_via_rpc 00:10:01.475 ************************************ 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59044 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59044 00:10:01.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59044 ']' 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:01.475 10:38:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.475 [2024-10-30 10:38:22.596925] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:10:01.475 [2024-10-30 10:38:22.597141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59044 ] 00:10:01.475 [2024-10-30 10:38:22.783476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.475 [2024-10-30 10:38:22.916685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.409 10:38:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59044 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59044 00:10:02.409 10:38:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59044 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59044 ']' 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59044 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59044 00:10:02.974 killing process with pid 59044 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59044' 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59044 00:10:02.974 10:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59044 00:10:05.504 ************************************ 00:10:05.504 END TEST default_locks_via_rpc 00:10:05.504 ************************************ 00:10:05.504 00:10:05.504 real 0m4.006s 00:10:05.504 user 0m3.999s 00:10:05.504 sys 0m0.763s 00:10:05.504 
10:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.504 10:38:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.504 10:38:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:05.504 10:38:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:05.504 10:38:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.504 10:38:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.504 ************************************ 00:10:05.504 START TEST non_locking_app_on_locked_coremask 00:10:05.504 ************************************ 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59118 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59118 /var/tmp/spdk.sock 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59118 ']' 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:05.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:05.504 10:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:05.504 [2024-10-30 10:38:26.655138] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:10:05.504 [2024-10-30 10:38:26.655351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59118 ] 00:10:05.504 [2024-10-30 10:38:26.840063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.504 [2024-10-30 10:38:26.961037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.440 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.440 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:06.440 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:06.441 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59138 00:10:06.441 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59138 /var/tmp/spdk2.sock 00:10:06.441 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59138 ']' 00:10:06.441 10:38:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:06.441 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:06.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:06.441 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:06.441 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:06.441 10:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.441 [2024-10-30 10:38:27.898917] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:10:06.441 [2024-10-30 10:38:27.899098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ] 00:10:06.699 [2024-10-30 10:38:28.092180] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:06.699 [2024-10-30 10:38:28.092256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.958 [2024-10-30 10:38:28.352329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.500 10:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:09.500 10:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:09.500 10:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59118 00:10:09.500 10:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59118 00:10:09.500 10:38:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59118 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59118 ']' 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59118 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59118 00:10:10.065 killing process with pid 59118 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59118' 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59118 00:10:10.065 10:38:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59118 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59138 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59138 ']' 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59138 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59138 00:10:15.335 killing process with pid 59138 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59138' 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59138 00:10:15.335 10:38:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59138 00:10:16.711 ************************************ 00:10:16.711 END TEST non_locking_app_on_locked_coremask 00:10:16.711 ************************************ 00:10:16.711 00:10:16.711 real 0m11.561s 
00:10:16.711 user 0m12.053s 00:10:16.711 sys 0m1.523s 00:10:16.711 10:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.711 10:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.711 10:38:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:16.711 10:38:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:16.711 10:38:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.711 10:38:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:16.711 ************************************ 00:10:16.711 START TEST locking_app_on_unlocked_coremask 00:10:16.712 ************************************ 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59288 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59288 /var/tmp/spdk.sock 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59288 ']' 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.712 10:38:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.970 [2024-10-30 10:38:38.265728] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:10:16.970 [2024-10-30 10:38:38.265911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ] 00:10:17.229 [2024-10-30 10:38:38.449050] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:17.229 [2024-10-30 10:38:38.449140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.229 [2024-10-30 10:38:38.581192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59309 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59309 /var/tmp/spdk2.sock 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59309 ']' 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:18.256 10:38:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:18.256 [2024-10-30 10:38:39.557721] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:10:18.256 [2024-10-30 10:38:39.558268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59309 ] 00:10:18.514 [2024-10-30 10:38:39.760106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.772 [2024-10-30 10:38:40.008229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.307 10:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:21.307 10:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:21.307 10:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59309 00:10:21.307 10:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59309 00:10:21.307 10:38:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59288 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59288 ']' 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59288 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59288 00:10:21.873 killing process with pid 59288 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59288' 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59288 00:10:21.873 10:38:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59288 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59309 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59309 ']' 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59309 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59309 00:10:27.139 killing process with pid 59309 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59309' 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59309 00:10:27.139 10:38:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 59309 00:10:28.514 ************************************ 00:10:28.514 END TEST locking_app_on_unlocked_coremask 00:10:28.514 ************************************ 00:10:28.514 00:10:28.514 real 0m11.576s 00:10:28.514 user 0m12.040s 00:10:28.514 sys 0m1.605s 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:28.514 10:38:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:28.514 10:38:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:28.514 10:38:49 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.514 10:38:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:28.514 ************************************ 00:10:28.514 START TEST locking_app_on_locked_coremask 00:10:28.514 ************************************ 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:10:28.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59452 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59452 /var/tmp/spdk.sock 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59452 ']' 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:28.514 10:38:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:28.514 [2024-10-30 10:38:49.904584] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:10:28.514 [2024-10-30 10:38:49.904773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59452 ] 00:10:28.772 [2024-10-30 10:38:50.094804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.772 [2024-10-30 10:38:50.230005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59473 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59473 /var/tmp/spdk2.sock 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59473 /var/tmp/spdk2.sock 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59473 /var/tmp/spdk2.sock 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59473 ']' 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.708 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:29.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:29.709 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.709 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:29.968 [2024-10-30 10:38:51.183912] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:10:29.968 [2024-10-30 10:38:51.184398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59473 ] 00:10:29.968 [2024-10-30 10:38:51.379290] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59452 has claimed it. 00:10:29.968 [2024-10-30 10:38:51.379457] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:10:30.536 ERROR: process (pid: 59473) is no longer running 00:10:30.536 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59473) - No such process 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59452 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59452 00:10:30.536 10:38:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59452 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59452 ']' 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59452 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59452 00:10:31.104 
killing process with pid 59452 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59452' 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59452 00:10:31.104 10:38:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59452 00:10:33.637 ************************************ 00:10:33.637 END TEST locking_app_on_locked_coremask 00:10:33.637 ************************************ 00:10:33.637 00:10:33.637 real 0m4.732s 00:10:33.637 user 0m5.078s 00:10:33.637 sys 0m0.919s 00:10:33.637 10:38:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:33.637 10:38:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:33.637 10:38:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:33.637 10:38:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:33.637 10:38:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.637 10:38:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:33.637 ************************************ 00:10:33.637 START TEST locking_overlapped_coremask 00:10:33.637 ************************************ 00:10:33.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59543 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59543 /var/tmp/spdk.sock 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59543 ']' 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:33.637 10:38:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:33.637 [2024-10-30 10:38:54.655952] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:10:33.637 [2024-10-30 10:38:54.656126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59543 ] 00:10:33.637 [2024-10-30 10:38:54.844551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.637 [2024-10-30 10:38:55.005680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.637 [2024-10-30 10:38:55.005785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.637 [2024-10-30 10:38:55.005797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59561 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59561 /var/tmp/spdk2.sock 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59561 /var/tmp/spdk2.sock 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59561 /var/tmp/spdk2.sock 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59561 ']' 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:34.574 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:34.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:34.575 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:34.575 10:38:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:34.575 [2024-10-30 10:38:56.009970] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:10:34.575 [2024-10-30 10:38:56.010720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59561 ] 00:10:34.834 [2024-10-30 10:38:56.204087] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59543 has claimed it. 00:10:34.834 [2024-10-30 10:38:56.204208] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
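Editor's note: the "Cannot create lock on core 2" error above comes from SPDK's per-core lock files (`/var/tmp/spdk_cpu_lock_*`): the first target (pid 59543, mask 0x7) already holds the lock for core 2, so the second target (mask 0x1c) cannot claim it and exits. A minimal sketch of that atomic claim-or-fail behavior, using `mkdir(1)` as a stand-in for SPDK's real `fcntl()`-based lock in `app.c` (the `/tmp/demo_*` paths are made up for illustration):

```shell
# Claim a "core" by atomically creating its lock entry; a second
# claimant fails, mirroring the contention on core 2 in the log.
claim_core() {
  mkdir "/tmp/demo_cpu_lock_$1" 2>/dev/null
}

claim_core 002 && echo "pid A claimed core 2"
if claim_core 002; then
  echo "pid B claimed core 2 (unexpected)"
else
  echo "pid B: cannot create lock on core 2, already claimed"
fi
rmdir /tmp/demo_cpu_lock_002   # release the lock
```

`mkdir` is atomic on POSIX filesystems, which is what makes it a usable (if simplistic) stand-in for a lock primitive here.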
00:10:35.403 ERROR: process (pid: 59561) is no longer running 00:10:35.403 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59561) - No such process 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59543 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59543 ']' 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59543 00:10:35.403 10:38:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59543 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:35.403 killing process with pid 59543 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59543' 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59543 00:10:35.403 10:38:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59543 00:10:37.936 00:10:37.936 real 0m4.436s 00:10:37.936 user 0m12.092s 00:10:37.936 sys 0m0.686s 00:10:37.936 10:38:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:37.937 10:38:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:37.937 ************************************ 00:10:37.937 END TEST locking_overlapped_coremask 00:10:37.937 ************************************ 00:10:37.937 10:38:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:37.937 10:38:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:37.937 10:38:59 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:37.937 10:38:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:37.937 ************************************ 00:10:37.937 START TEST 
locking_overlapped_coremask_via_rpc 00:10:37.937 ************************************ 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59625 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59625 /var/tmp/spdk.sock 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59625 ']' 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:37.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:37.937 10:38:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.937 [2024-10-30 10:38:59.157359] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:10:37.937 [2024-10-30 10:38:59.157549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59625 ] 00:10:37.937 [2024-10-30 10:38:59.335618] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:37.937 [2024-10-30 10:38:59.335697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:38.195 [2024-10-30 10:38:59.470926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.195 [2024-10-30 10:38:59.471052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.195 [2024-10-30 10:38:59.471073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59648 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59648 /var/tmp/spdk2.sock 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59648 ']' 00:10:39.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
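Editor's note: the repeated "Waiting for process to start up and listen on UNIX domain socket..." lines come from the `waitforlisten` helper (`autotest_common.sh@838` sets `max_retries=100`). A minimal sketch of that bounded-retry polling pattern, with made-up socket path and interval:

```shell
# Poll until the target's UNIX-domain RPC socket exists, giving up
# after a bounded number of retries (the real helper also checks the
# target pid and retries RPC connects).
waitforsocket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # socket is up, target is listening
    sleep 0.1
  done
  return 1                       # gave up; caller reports the failure
}

waitforsocket /tmp/demo_spdk.sock 3 || echo "no listener on /tmp/demo_spdk.sock"
```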
00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:39.130 10:39:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.130 [2024-10-30 10:39:00.525667] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:10:39.130 [2024-10-30 10:39:00.525858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59648 ] 00:10:39.389 [2024-10-30 10:39:00.730800] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
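Editor's note: both variants of this test collide on core 2 because the two core masks overlap: 0x7 covers cores 0-2 and 0x1c covers cores 2-4. The contested core falls out of a bitwise AND of the masks:

```shell
# The two core masks from the log: 0x7 (cores 0-2) and 0x1c (cores 2-4).
# Their bitwise AND is the contested bit, which is why the second lock
# claim always fails on core 2.
mask1=0x7
mask2=0x1c
overlap=$(( mask1 & mask2 ))
printf 'overlap mask: 0x%x\n' "$overlap"   # 0x4

# List the overlapping core numbers.
for core in $(seq 0 4); do
  if (( overlap >> core & 1 )); then
    echo "contested core: $core"
  fi
done
```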
00:10:39.389 [2024-10-30 10:39:00.730931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.647 [2024-10-30 10:39:00.994714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.647 [2024-10-30 10:39:00.994825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.647 [2024-10-30 10:39:00.994851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.181 10:39:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.181 [2024-10-30 10:39:03.289227] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59625 has claimed it. 00:10:42.181 request: 00:10:42.181 { 00:10:42.181 "method": "framework_enable_cpumask_locks", 00:10:42.181 "req_id": 1 00:10:42.181 } 00:10:42.181 Got JSON-RPC error response 00:10:42.181 response: 00:10:42.181 { 00:10:42.181 "code": -32603, 00:10:42.181 "message": "Failed to claim CPU core: 2" 00:10:42.181 } 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:42.181 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59625 /var/tmp/spdk.sock 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # 
'[' -z 59625 ']' 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59648 /var/tmp/spdk2.sock 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59648 ']' 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:42.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
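Editor's note: the `check_remaining_locks` helper traced in this section (`event/cpu_locks.sh@36-38`) globs the surviving lock files and compares them with the brace-expanded set expected for cores 0-2 — the escaped-backslash pattern in the trace is just xtrace rendering of that comparison. A sketch of the same check, with a temp directory standing in for `/var/tmp`:

```shell
# Recreate the expected lock set and verify the glob matches it,
# as check_remaining_locks does after each coremask test.
demo=$(mktemp -d)
touch "$demo"/spdk_cpu_lock_{000..002}

locks=("$demo"/spdk_cpu_lock_*)                 # what actually exists
locks_expected=("$demo"/spdk_cpu_lock_{000..002})  # what should exist

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
  echo "remaining locks match cores 0-2"
else
  echo "unexpected locks: ${locks[*]}"
fi
rm -r "$demo"
```

Brace expansion produces the expected names whether or not the files exist, while the glob only matches real files — so the string comparison catches both missing and leftover locks.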
00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.182 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.750 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.750 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:42.750 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:42.750 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:42.750 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:42.750 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:42.750 00:10:42.750 real 0m4.879s 00:10:42.750 user 0m1.825s 00:10:42.750 sys 0m0.268s 00:10:42.750 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:42.750 10:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.750 ************************************ 00:10:42.750 END TEST locking_overlapped_coremask_via_rpc 00:10:42.750 ************************************ 00:10:42.750 10:39:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:42.750 10:39:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59625 ]] 00:10:42.750 10:39:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59625 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59625 ']' 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59625 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59625 00:10:42.750 killing process with pid 59625 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59625' 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59625 00:10:42.750 10:39:03 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59625 00:10:45.285 10:39:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59648 ]] 00:10:45.285 10:39:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59648 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59648 ']' 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59648 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59648 00:10:45.285 killing process with pid 59648 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59648' 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59648 00:10:45.285 10:39:06 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59648 00:10:47.188 10:39:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:47.188 Process with pid 59625 is not found 00:10:47.188 Process with pid 59648 is not found 00:10:47.188 10:39:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:47.188 10:39:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59625 ]] 00:10:47.188 10:39:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59625 00:10:47.188 10:39:08 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59625 ']' 00:10:47.188 10:39:08 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59625 00:10:47.188 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59625) - No such process 00:10:47.188 10:39:08 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59625 is not found' 00:10:47.188 10:39:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59648 ]] 00:10:47.188 10:39:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59648 00:10:47.189 10:39:08 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59648 ']' 00:10:47.189 10:39:08 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59648 00:10:47.189 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59648) - No such process 00:10:47.189 10:39:08 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59648 is not found' 00:10:47.189 10:39:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:47.189 00:10:47.189 real 0m50.158s 00:10:47.189 user 1m27.309s 00:10:47.189 sys 0m7.722s 00:10:47.189 10:39:08 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.189 10:39:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:47.189 
************************************ 00:10:47.189 END TEST cpu_locks 00:10:47.189 ************************************ 00:10:47.189 00:10:47.189 real 1m21.384s 00:10:47.189 user 2m30.192s 00:10:47.189 sys 0m11.868s 00:10:47.189 10:39:08 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.189 10:39:08 event -- common/autotest_common.sh@10 -- # set +x 00:10:47.189 ************************************ 00:10:47.189 END TEST event 00:10:47.189 ************************************ 00:10:47.189 10:39:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:47.189 10:39:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:47.189 10:39:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:47.189 10:39:08 -- common/autotest_common.sh@10 -- # set +x 00:10:47.189 ************************************ 00:10:47.189 START TEST thread 00:10:47.189 ************************************ 00:10:47.189 10:39:08 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:47.448 * Looking for test storage... 
00:10:47.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:47.448 10:39:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.448 10:39:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.448 10:39:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.448 10:39:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.448 10:39:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.448 10:39:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.448 10:39:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.448 10:39:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.448 10:39:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.448 10:39:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.448 10:39:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.448 10:39:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:47.448 10:39:08 thread -- scripts/common.sh@345 -- # : 1 00:10:47.448 10:39:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.448 10:39:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.448 10:39:08 thread -- scripts/common.sh@365 -- # decimal 1 00:10:47.448 10:39:08 thread -- scripts/common.sh@353 -- # local d=1 00:10:47.448 10:39:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.448 10:39:08 thread -- scripts/common.sh@355 -- # echo 1 00:10:47.448 10:39:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.448 10:39:08 thread -- scripts/common.sh@366 -- # decimal 2 00:10:47.448 10:39:08 thread -- scripts/common.sh@353 -- # local d=2 00:10:47.448 10:39:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.448 10:39:08 thread -- scripts/common.sh@355 -- # echo 2 00:10:47.448 10:39:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.448 10:39:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.448 10:39:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.448 10:39:08 thread -- scripts/common.sh@368 -- # return 0 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:47.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.448 --rc genhtml_branch_coverage=1 00:10:47.448 --rc genhtml_function_coverage=1 00:10:47.448 --rc genhtml_legend=1 00:10:47.448 --rc geninfo_all_blocks=1 00:10:47.448 --rc geninfo_unexecuted_blocks=1 00:10:47.448 00:10:47.448 ' 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:47.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.448 --rc genhtml_branch_coverage=1 00:10:47.448 --rc genhtml_function_coverage=1 00:10:47.448 --rc genhtml_legend=1 00:10:47.448 --rc geninfo_all_blocks=1 00:10:47.448 --rc geninfo_unexecuted_blocks=1 00:10:47.448 00:10:47.448 ' 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:47.448 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.448 --rc genhtml_branch_coverage=1 00:10:47.448 --rc genhtml_function_coverage=1 00:10:47.448 --rc genhtml_legend=1 00:10:47.448 --rc geninfo_all_blocks=1 00:10:47.448 --rc geninfo_unexecuted_blocks=1 00:10:47.448 00:10:47.448 ' 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:47.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.448 --rc genhtml_branch_coverage=1 00:10:47.448 --rc genhtml_function_coverage=1 00:10:47.448 --rc genhtml_legend=1 00:10:47.448 --rc geninfo_all_blocks=1 00:10:47.448 --rc geninfo_unexecuted_blocks=1 00:10:47.448 00:10:47.448 ' 00:10:47.448 10:39:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:47.448 10:39:08 thread -- common/autotest_common.sh@10 -- # set +x 00:10:47.448 ************************************ 00:10:47.448 START TEST thread_poller_perf 00:10:47.448 ************************************ 00:10:47.448 10:39:08 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:47.448 [2024-10-30 10:39:08.835039] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:10:47.448 [2024-10-30 10:39:08.835611] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59849 ] 00:10:47.707 [2024-10-30 10:39:09.016773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.965 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:47.965 [2024-10-30 10:39:09.182871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.352 [2024-10-30T10:39:10.822Z] ====================================== 00:10:49.352 [2024-10-30T10:39:10.822Z] busy:2216272484 (cyc) 00:10:49.352 [2024-10-30T10:39:10.822Z] total_run_count: 297000 00:10:49.352 [2024-10-30T10:39:10.822Z] tsc_hz: 2200000000 (cyc) 00:10:49.352 [2024-10-30T10:39:10.822Z] ====================================== 00:10:49.352 [2024-10-30T10:39:10.822Z] poller_cost: 7462 (cyc), 3391 (nsec) 00:10:49.352 00:10:49.352 real 0m1.627s 00:10:49.352 user 0m1.416s 00:10:49.352 sys 0m0.100s 00:10:49.352 10:39:10 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:49.352 ************************************ 00:10:49.352 END TEST thread_poller_perf 00:10:49.352 ************************************ 00:10:49.352 10:39:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:49.352 10:39:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:49.352 10:39:10 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:10:49.352 10:39:10 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:49.352 10:39:10 thread -- common/autotest_common.sh@10 -- # set +x 00:10:49.352 ************************************ 00:10:49.352 START TEST thread_poller_perf 00:10:49.352 
************************************ 00:10:49.352 10:39:10 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:49.352 [2024-10-30 10:39:10.529484] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:10:49.352 [2024-10-30 10:39:10.529664] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59880 ] 00:10:49.352 [2024-10-30 10:39:10.714690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.610 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:49.610 [2024-10-30 10:39:10.843735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.984 [2024-10-30T10:39:12.454Z] ====================================== 00:10:50.984 [2024-10-30T10:39:12.454Z] busy:2203870944 (cyc) 00:10:50.984 [2024-10-30T10:39:12.454Z] total_run_count: 3780000 00:10:50.984 [2024-10-30T10:39:12.454Z] tsc_hz: 2200000000 (cyc) 00:10:50.984 [2024-10-30T10:39:12.454Z] ====================================== 00:10:50.984 [2024-10-30T10:39:12.454Z] poller_cost: 583 (cyc), 265 (nsec) 00:10:50.984 00:10:50.984 real 0m1.598s 00:10:50.984 user 0m1.384s 00:10:50.984 sys 0m0.105s 00:10:50.984 10:39:12 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.984 10:39:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:50.984 ************************************ 00:10:50.984 END TEST thread_poller_perf 00:10:50.984 ************************************ 00:10:50.984 10:39:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:50.984 ************************************ 00:10:50.984 END TEST thread 00:10:50.984 ************************************ 00:10:50.984 
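Editor's note: the `poller_cost` figures in the two perf summaries above are derived from the other reported numbers: cost in cycles is busy cycles divided by `total_run_count`, and the nanosecond figure follows from `tsc_hz`. Reproducing both runs with integer division (which matches the log's truncated values):

```shell
# poller_cost = busy_cycles / total_run_count; nsec via tsc_hz.
tsc_hz=2200000000

cost() {  # args: busy_cyc total_run_count -> "cyc nsec"
  local cyc=$(( $1 / $2 ))
  echo "$cyc $(( cyc * 1000000000 / tsc_hz ))"
}

cost 2216272484 297000    # 1 us period run: 7462 cyc, 3391 nsec
cost 2203870944 3780000   # 0 us period run: 583 cyc, 265 nsec
```

The zero-period run is roughly 13x cheaper per poll because each poller fires on every reactor iteration instead of waiting out a 1-microsecond timer.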
00:10:50.984 real 0m3.529s 00:10:50.984 user 0m2.961s 00:10:50.984 sys 0m0.343s 00:10:50.984 10:39:12 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.984 10:39:12 thread -- common/autotest_common.sh@10 -- # set +x 00:10:50.984 10:39:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:50.984 10:39:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:50.984 10:39:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:50.984 10:39:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.984 10:39:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.984 ************************************ 00:10:50.984 START TEST app_cmdline 00:10:50.984 ************************************ 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:50.984 * Looking for test storage... 00:10:50.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.984 10:39:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:50.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.984 --rc genhtml_branch_coverage=1 00:10:50.984 --rc genhtml_function_coverage=1 00:10:50.984 --rc genhtml_legend=1 00:10:50.984 --rc geninfo_all_blocks=1 00:10:50.984 --rc geninfo_unexecuted_blocks=1 00:10:50.984 00:10:50.984 ' 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.984 --rc genhtml_branch_coverage=1 00:10:50.984 --rc genhtml_function_coverage=1 00:10:50.984 --rc genhtml_legend=1 00:10:50.984 --rc geninfo_all_blocks=1 00:10:50.984 --rc geninfo_unexecuted_blocks=1 00:10:50.984 00:10:50.984 ' 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.984 --rc genhtml_branch_coverage=1 00:10:50.984 --rc genhtml_function_coverage=1 00:10:50.984 --rc genhtml_legend=1 00:10:50.984 --rc geninfo_all_blocks=1 00:10:50.984 --rc geninfo_unexecuted_blocks=1 00:10:50.984 00:10:50.984 ' 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:50.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.984 --rc genhtml_branch_coverage=1 00:10:50.984 --rc genhtml_function_coverage=1 00:10:50.984 --rc genhtml_legend=1 00:10:50.984 --rc geninfo_all_blocks=1 00:10:50.984 --rc geninfo_unexecuted_blocks=1 00:10:50.984 00:10:50.984 ' 00:10:50.984 10:39:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:50.984 10:39:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59969 00:10:50.984 10:39:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59969 
00:10:50.984 10:39:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59969 ']' 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:50.984 10:39:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:51.258 [2024-10-30 10:39:12.482321] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:10:51.258 [2024-10-30 10:39:12.482480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59969 ] 00:10:51.258 [2024-10-30 10:39:12.658547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.517 [2024-10-30 10:39:12.786206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.450 10:39:13 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:52.450 10:39:13 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:10:52.450 10:39:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:52.450 { 00:10:52.450 "version": "SPDK v25.01-pre git sha1 504f4c967", 00:10:52.450 "fields": { 00:10:52.450 "major": 25, 00:10:52.450 "minor": 1, 00:10:52.450 "patch": 0, 00:10:52.450 "suffix": "-pre", 00:10:52.450 "commit": "504f4c967" 00:10:52.450 } 00:10:52.450 } 00:10:52.450 10:39:13 
app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:52.450 10:39:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:52.450 10:39:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:52.450 10:39:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:52.450 10:39:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:52.450 10:39:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:52.450 10:39:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:52.450 10:39:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.450 10:39:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.708 10:39:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:52.708 10:39:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:52.708 10:39:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:52.708 10:39:13 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:52.967 request: 00:10:52.967 { 00:10:52.967 "method": "env_dpdk_get_mem_stats", 00:10:52.967 "req_id": 1 00:10:52.967 } 00:10:52.967 Got JSON-RPC error response 00:10:52.967 response: 00:10:52.967 { 00:10:52.967 "code": -32601, 00:10:52.968 "message": "Method not found" 00:10:52.968 } 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:52.968 10:39:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59969 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59969 ']' 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59969 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59969 00:10:52.968 killing process with pid 59969 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 59969' 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@971 -- # kill 59969 00:10:52.968 10:39:14 app_cmdline -- common/autotest_common.sh@976 -- # wait 59969 00:10:55.502 ************************************ 00:10:55.502 END TEST app_cmdline 00:10:55.502 ************************************ 00:10:55.502 00:10:55.503 real 0m4.304s 00:10:55.503 user 0m4.715s 00:10:55.503 sys 0m0.667s 00:10:55.503 10:39:16 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.503 10:39:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:55.503 10:39:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:55.503 10:39:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:55.503 10:39:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.503 10:39:16 -- common/autotest_common.sh@10 -- # set +x 00:10:55.503 ************************************ 00:10:55.503 START TEST version 00:10:55.503 ************************************ 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:55.503 * Looking for test storage... 
00:10:55.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1691 -- # lcov --version 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:55.503 10:39:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.503 10:39:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.503 10:39:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.503 10:39:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.503 10:39:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.503 10:39:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.503 10:39:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.503 10:39:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.503 10:39:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.503 10:39:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.503 10:39:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.503 10:39:16 version -- scripts/common.sh@344 -- # case "$op" in 00:10:55.503 10:39:16 version -- scripts/common.sh@345 -- # : 1 00:10:55.503 10:39:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.503 10:39:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.503 10:39:16 version -- scripts/common.sh@365 -- # decimal 1 00:10:55.503 10:39:16 version -- scripts/common.sh@353 -- # local d=1 00:10:55.503 10:39:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.503 10:39:16 version -- scripts/common.sh@355 -- # echo 1 00:10:55.503 10:39:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.503 10:39:16 version -- scripts/common.sh@366 -- # decimal 2 00:10:55.503 10:39:16 version -- scripts/common.sh@353 -- # local d=2 00:10:55.503 10:39:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.503 10:39:16 version -- scripts/common.sh@355 -- # echo 2 00:10:55.503 10:39:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.503 10:39:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.503 10:39:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.503 10:39:16 version -- scripts/common.sh@368 -- # return 0 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:55.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.503 --rc genhtml_branch_coverage=1 00:10:55.503 --rc genhtml_function_coverage=1 00:10:55.503 --rc genhtml_legend=1 00:10:55.503 --rc geninfo_all_blocks=1 00:10:55.503 --rc geninfo_unexecuted_blocks=1 00:10:55.503 00:10:55.503 ' 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:55.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.503 --rc genhtml_branch_coverage=1 00:10:55.503 --rc genhtml_function_coverage=1 00:10:55.503 --rc genhtml_legend=1 00:10:55.503 --rc geninfo_all_blocks=1 00:10:55.503 --rc geninfo_unexecuted_blocks=1 00:10:55.503 00:10:55.503 ' 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:55.503 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.503 --rc genhtml_branch_coverage=1 00:10:55.503 --rc genhtml_function_coverage=1 00:10:55.503 --rc genhtml_legend=1 00:10:55.503 --rc geninfo_all_blocks=1 00:10:55.503 --rc geninfo_unexecuted_blocks=1 00:10:55.503 00:10:55.503 ' 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:55.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.503 --rc genhtml_branch_coverage=1 00:10:55.503 --rc genhtml_function_coverage=1 00:10:55.503 --rc genhtml_legend=1 00:10:55.503 --rc geninfo_all_blocks=1 00:10:55.503 --rc geninfo_unexecuted_blocks=1 00:10:55.503 00:10:55.503 ' 00:10:55.503 10:39:16 version -- app/version.sh@17 -- # get_header_version major 00:10:55.503 10:39:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:55.503 10:39:16 version -- app/version.sh@14 -- # cut -f2 00:10:55.503 10:39:16 version -- app/version.sh@14 -- # tr -d '"' 00:10:55.503 10:39:16 version -- app/version.sh@17 -- # major=25 00:10:55.503 10:39:16 version -- app/version.sh@18 -- # get_header_version minor 00:10:55.503 10:39:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:55.503 10:39:16 version -- app/version.sh@14 -- # cut -f2 00:10:55.503 10:39:16 version -- app/version.sh@14 -- # tr -d '"' 00:10:55.503 10:39:16 version -- app/version.sh@18 -- # minor=1 00:10:55.503 10:39:16 version -- app/version.sh@19 -- # get_header_version patch 00:10:55.503 10:39:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:55.503 10:39:16 version -- app/version.sh@14 -- # cut -f2 00:10:55.503 10:39:16 version -- app/version.sh@14 -- # tr -d '"' 00:10:55.503 10:39:16 version -- app/version.sh@19 -- # patch=0 00:10:55.503 
10:39:16 version -- app/version.sh@20 -- # get_header_version suffix 00:10:55.503 10:39:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:55.503 10:39:16 version -- app/version.sh@14 -- # cut -f2 00:10:55.503 10:39:16 version -- app/version.sh@14 -- # tr -d '"' 00:10:55.503 10:39:16 version -- app/version.sh@20 -- # suffix=-pre 00:10:55.503 10:39:16 version -- app/version.sh@22 -- # version=25.1 00:10:55.503 10:39:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:55.503 10:39:16 version -- app/version.sh@28 -- # version=25.1rc0 00:10:55.503 10:39:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:55.503 10:39:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:55.503 10:39:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:55.503 10:39:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:55.503 00:10:55.503 real 0m0.311s 00:10:55.503 user 0m0.211s 00:10:55.503 sys 0m0.135s 00:10:55.503 ************************************ 00:10:55.503 END TEST version 00:10:55.503 ************************************ 00:10:55.503 10:39:16 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.503 10:39:16 version -- common/autotest_common.sh@10 -- # set +x 00:10:55.503 10:39:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:55.503 10:39:16 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:10:55.503 10:39:16 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:55.503 10:39:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:55.503 10:39:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.503 10:39:16 -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.503 ************************************ 00:10:55.503 START TEST bdev_raid 00:10:55.503 ************************************ 00:10:55.503 10:39:16 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:55.504 * Looking for test storage... 00:10:55.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:55.504 10:39:16 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:55.504 10:39:16 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:55.504 10:39:16 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@345 -- # : 1 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.763 10:39:17 bdev_raid -- scripts/common.sh@368 -- # return 0 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:55.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.763 --rc genhtml_branch_coverage=1 00:10:55.763 --rc genhtml_function_coverage=1 00:10:55.763 --rc genhtml_legend=1 00:10:55.763 --rc geninfo_all_blocks=1 00:10:55.763 --rc geninfo_unexecuted_blocks=1 00:10:55.763 00:10:55.763 ' 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:55.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.763 --rc genhtml_branch_coverage=1 00:10:55.763 --rc genhtml_function_coverage=1 00:10:55.763 --rc genhtml_legend=1 00:10:55.763 --rc geninfo_all_blocks=1 00:10:55.763 --rc geninfo_unexecuted_blocks=1 00:10:55.763 00:10:55.763 ' 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:10:55.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.763 --rc genhtml_branch_coverage=1 00:10:55.763 --rc genhtml_function_coverage=1 00:10:55.763 --rc genhtml_legend=1 00:10:55.763 --rc geninfo_all_blocks=1 00:10:55.763 --rc geninfo_unexecuted_blocks=1 00:10:55.763 00:10:55.763 ' 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:55.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.763 --rc genhtml_branch_coverage=1 00:10:55.763 --rc genhtml_function_coverage=1 00:10:55.763 --rc genhtml_legend=1 00:10:55.763 --rc geninfo_all_blocks=1 00:10:55.763 --rc geninfo_unexecuted_blocks=1 00:10:55.763 00:10:55.763 ' 00:10:55.763 10:39:17 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:55.763 10:39:17 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:10:55.763 10:39:17 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:10:55.763 10:39:17 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:10:55.763 10:39:17 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:10:55.763 10:39:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:10:55.763 10:39:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.763 10:39:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.763 ************************************ 00:10:55.763 START TEST raid1_resize_data_offset_test 00:10:55.763 ************************************ 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=60157 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60157' 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:55.763 Process raid pid: 60157 00:10:55.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60157 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 60157 ']' 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:55.763 10:39:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.763 [2024-10-30 10:39:17.184778] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:10:55.763 [2024-10-30 10:39:17.185152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.022 [2024-10-30 10:39:17.367922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.280 [2024-10-30 10:39:17.522200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.280 [2024-10-30 10:39:17.746826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.281 [2024-10-30 10:39:17.746876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.850 malloc0 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.850 malloc1 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.850 10:39:18 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.850 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.108 null0 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.108 [2024-10-30 10:39:18.328865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:10:57.108 [2024-10-30 10:39:18.331233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:57.108 [2024-10-30 10:39:18.331306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:10:57.108 [2024-10-30 10:39:18.331541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:57.108 [2024-10-30 10:39:18.331565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:10:57.108 [2024-10-30 10:39:18.331918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:57.108 [2024-10-30 10:39:18.332175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:57.108 [2024-10-30 10:39:18.332197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:57.108 [2024-10-30 10:39:18.332395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.108 [2024-10-30 10:39:18.384879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.108 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.676 malloc2 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.676 [2024-10-30 10:39:18.932333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:57.676 [2024-10-30 10:39:18.949612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.676 [2024-10-30 10:39:18.952170] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.676 10:39:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60157 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 60157 ']' 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 60157 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60157 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:57.676 killing process with pid 60157 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60157' 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 60157 00:10:57.676 10:39:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 60157 00:10:57.676 [2024-10-30 10:39:19.041772] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.676 [2024-10-30 10:39:19.043864] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:10:57.676 [2024-10-30 10:39:19.043941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.676 [2024-10-30 10:39:19.043968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:10:57.676 [2024-10-30 10:39:19.075807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.676 [2024-10-30 10:39:19.076242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.676 [2024-10-30 10:39:19.076280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:59.589 [2024-10-30 10:39:20.742506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.525 10:39:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:11:00.525 00:11:00.525 real 0m4.711s 00:11:00.525 user 0m4.650s 00:11:00.525 sys 0m0.624s 00:11:00.525 10:39:21 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:00.525 10:39:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.525 ************************************ 00:11:00.525 END TEST raid1_resize_data_offset_test 00:11:00.525 ************************************ 00:11:00.525 10:39:21 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:11:00.525 10:39:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:00.525 10:39:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.525 10:39:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.525 ************************************ 00:11:00.525 START TEST raid0_resize_superblock_test 00:11:00.525 ************************************ 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60240 00:11:00.525 Process raid pid: 60240 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60240' 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60240 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60240 ']' 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.525 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:00.525 10:39:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.525 [2024-10-30 10:39:21.962224] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:11:00.525 [2024-10-30 10:39:21.962417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.784 [2024-10-30 10:39:22.150718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.043 [2024-10-30 10:39:22.282534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.043 [2024-10-30 10:39:22.494558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.043 [2024-10-30 10:39:22.494665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.651 10:39:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:01.651 10:39:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:01.651 10:39:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:11:01.651 10:39:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.651 10:39:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.220 
malloc0 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.220 [2024-10-30 10:39:23.535369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:02.220 [2024-10-30 10:39:23.535512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.220 [2024-10-30 10:39:23.535546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:02.220 [2024-10-30 10:39:23.535568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.220 [2024-10-30 10:39:23.538447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.220 [2024-10-30 10:39:23.538507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:02.220 pt0 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.220 99b47006-1859-4dd0-85b4-b965f3cc4310 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:11:02.220 10:39:23 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.220 cf619044-15e4-4a07-9387-ebf5b085179e 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.220 fc176ac9-cf27-4222-b27b-5e339354a826 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.220 [2024-10-30 10:39:23.681959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cf619044-15e4-4a07-9387-ebf5b085179e is claimed 00:11:02.220 [2024-10-30 10:39:23.682123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fc176ac9-cf27-4222-b27b-5e339354a826 is claimed 00:11:02.220 [2024-10-30 10:39:23.682340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:02.220 [2024-10-30 10:39:23.682370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:11:02.220 [2024-10-30 10:39:23.682739] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:02.220 [2024-10-30 10:39:23.683027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:02.220 [2024-10-30 10:39:23.683044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:02.220 [2024-10-30 10:39:23.683254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.220 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.479 [2024-10-30 10:39:23.794293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.479 [2024-10-30 10:39:23.842263] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:02.479 [2024-10-30 10:39:23.842304] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cf619044-15e4-4a07-9387-ebf5b085179e' was resized: old size 131072, new size 204800 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.479 10:39:23 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.479 [2024-10-30 10:39:23.850084] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:02.479 [2024-10-30 10:39:23.850117] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fc176ac9-cf27-4222-b27b-5e339354a826' was resized: old size 131072, new size 204800 00:11:02.479 [2024-10-30 10:39:23.850152] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:02.479 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.479 10:39:23 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.480 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.739 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:11:02.739 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:02.739 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:02.739 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.739 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:02.739 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:11:02.739 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.740 [2024-10-30 10:39:23.958335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.740 10:39:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.740 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:02.740 10:39:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.740 [2024-10-30 10:39:24.010059] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:11:02.740 [2024-10-30 10:39:24.010168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:02.740 [2024-10-30 10:39:24.010188] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.740 [2024-10-30 10:39:24.010216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:02.740 [2024-10-30 10:39:24.010361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.740 [2024-10-30 10:39:24.010426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.740 [2024-10-30 10:39:24.010446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.740 [2024-10-30 10:39:24.017901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:02.740 [2024-10-30 10:39:24.017969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.740 [2024-10-30 10:39:24.018013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:02.740 [2024-10-30 10:39:24.018032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.740 [2024-10-30 10:39:24.020903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.740 [2024-10-30 10:39:24.020982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:11:02.740 pt0 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:02.740 [2024-10-30 10:39:24.023330] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cf619044-15e4-4a07-9387-ebf5b085179e 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.740 [2024-10-30 10:39:24.023399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cf619044-15e4-4a07-9387-ebf5b085179e is claimed 00:11:02.740 [2024-10-30 10:39:24.023545] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fc176ac9-cf27-4222-b27b-5e339354a826 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.740 [2024-10-30 10:39:24.023580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fc176ac9-cf27-4222-b27b-5e339354a826 is claimed 00:11:02.740 [2024-10-30 10:39:24.023746] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev fc176ac9-cf27-4222-b27b-5e339354a826 (2) smaller than existing raid bdev Raid (3) 00:11:02.740 [2024-10-30 10:39:24.023780] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev cf619044-15e4-4a07-9387-ebf5b085179e: File exists 00:11:02.740 [2024-10-30 10:39:24.023836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:02.740 [2024-10-30 10:39:24.023860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:11:02.740 [2024-10-30 10:39:24.024190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:02.740 [2024-10-30 10:39:24.024386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:02.740 [2024-10-30 
10:39:24.024400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:11:02.740 [2024-10-30 10:39:24.024583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.740 [2024-10-30 10:39:24.038364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60240 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60240 ']' 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60240 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60240 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:02.740 killing process with pid 60240 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60240' 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60240 00:11:02.740 [2024-10-30 10:39:24.116107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.740 [2024-10-30 10:39:24.116181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.740 10:39:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60240 00:11:02.740 [2024-10-30 10:39:24.116248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.740 [2024-10-30 10:39:24.116262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:11:04.118 [2024-10-30 10:39:25.484760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.497 10:39:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:11:05.497 00:11:05.497 real 0m4.703s 00:11:05.497 user 0m5.084s 00:11:05.497 sys 0m0.612s 00:11:05.497 10:39:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:05.497 10:39:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.497 
************************************ 00:11:05.497 END TEST raid0_resize_superblock_test 00:11:05.497 ************************************ 00:11:05.497 10:39:26 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:11:05.497 10:39:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:05.497 10:39:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:05.497 10:39:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.497 ************************************ 00:11:05.497 START TEST raid1_resize_superblock_test 00:11:05.497 ************************************ 00:11:05.497 Process raid pid: 60343 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60343 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60343' 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60343 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60343 ']' 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:05.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:05.497 10:39:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.497 [2024-10-30 10:39:26.717527] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:11:05.497 [2024-10-30 10:39:26.717705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.497 [2024-10-30 10:39:26.910873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.756 [2024-10-30 10:39:27.064482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.014 [2024-10-30 10:39:27.284619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.014 [2024-10-30 10:39:27.284695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.274 10:39:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:06.274 10:39:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:06.274 10:39:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:11:06.274 10:39:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.274 10:39:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.841 malloc0 00:11:06.841 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.841 10:39:28 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:06.841 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.841 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.842 [2024-10-30 10:39:28.253284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:06.842 [2024-10-30 10:39:28.253380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.842 [2024-10-30 10:39:28.253412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:06.842 [2024-10-30 10:39:28.253446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.842 [2024-10-30 10:39:28.256226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.842 [2024-10-30 10:39:28.256292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:06.842 pt0 00:11:06.842 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.842 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:11:06.842 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.842 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 48a281d1-1e6f-47bb-b208-9b2e1d6614d1 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.100 10:39:28 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 a0851985-cdec-4ea1-8aaa-b88a567d0a3e 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 c5f2b052-9b48-4577-b0e0-ada22996060d 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 [2024-10-30 10:39:28.399570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a0851985-cdec-4ea1-8aaa-b88a567d0a3e is claimed 00:11:07.100 [2024-10-30 10:39:28.399757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c5f2b052-9b48-4577-b0e0-ada22996060d is claimed 00:11:07.100 [2024-10-30 10:39:28.400009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:07.100 [2024-10-30 10:39:28.400047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:11:07.100 [2024-10-30 10:39:28.400408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:07.100 [2024-10-30 10:39:28.400714] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:07.100 [2024-10-30 10:39:28.400742] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:07.100 [2024-10-30 10:39:28.400956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.100 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:11:07.101 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:07.101 10:39:28 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:07.101 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.101 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.101 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:07.101 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:11:07.101 [2024-10-30 10:39:28.519937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.101 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.360 [2024-10-30 10:39:28.599987] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:07.360 [2024-10-30 10:39:28.600028] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a0851985-cdec-4ea1-8aaa-b88a567d0a3e' was resized: old size 131072, new size 204800 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:11:07.360 10:39:28 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.360 [2024-10-30 10:39:28.607779] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:07.360 [2024-10-30 10:39:28.607810] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c5f2b052-9b48-4577-b0e0-ada22996060d' was resized: old size 131072, new size 204800 00:11:07.360 [2024-10-30 10:39:28.607854] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.360 [2024-10-30 10:39:28.728016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.360 [2024-10-30 10:39:28.795728] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:07.360 [2024-10-30 10:39:28.795834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:11:07.360 [2024-10-30 10:39:28.795875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:07.360 [2024-10-30 10:39:28.796098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.360 [2024-10-30 10:39:28.796385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.360 [2024-10-30 10:39:28.796570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.360 [2024-10-30 10:39:28.796604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.360 [2024-10-30 10:39:28.803604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:07.360 [2024-10-30 10:39:28.803815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.360 [2024-10-30 10:39:28.803854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:07.360 [2024-10-30 10:39:28.803876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.360 [2024-10-30 10:39:28.806802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.360 [2024-10-30 10:39:28.806969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:07.360 pt0 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 
10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.360 [2024-10-30 10:39:28.809333] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a0851985-cdec-4ea1-8aaa-b88a567d0a3e 00:11:07.360 [2024-10-30 10:39:28.809416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a0851985-cdec-4ea1-8aaa-b88a567d0a3e is claimed 00:11:07.360 [2024-10-30 10:39:28.809560] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c5f2b052-9b48-4577-b0e0-ada22996060d 00:11:07.360 [2024-10-30 10:39:28.809596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c5f2b052-9b48-4577-b0e0-ada22996060d is claimed 00:11:07.360 [2024-10-30 10:39:28.809748] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c5f2b052-9b48-4577-b0e0-ada22996060d (2) smaller than existing raid bdev Raid (3) 00:11:07.360 [2024-10-30 10:39:28.809800] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a0851985-cdec-4ea1-8aaa-b88a567d0a3e: File exists 00:11:07.360 [2024-10-30 10:39:28.809861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:07.360 [2024-10-30 10:39:28.809881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:07.360 [2024-10-30 10:39:28.810340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:07.360 [2024-10-30 10:39:28.810682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:07.360 [2024-10-30 10:39:28.810807] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:11:07.360 
[2024-10-30 10:39:28.811050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.360 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:07.361 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:07.361 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:07.361 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:11:07.361 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.361 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.361 [2024-10-30 10:39:28.827953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60343 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60343 ']' 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60343 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60343 00:11:07.620 killing process with pid 60343 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60343' 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60343 00:11:07.620 [2024-10-30 10:39:28.910598] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.620 10:39:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60343 00:11:07.620 [2024-10-30 10:39:28.910708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.620 [2024-10-30 10:39:28.910784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.620 [2024-10-30 10:39:28.910799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:11:08.994 [2024-10-30 10:39:30.208559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.929 ************************************ 00:11:09.929 END TEST raid1_resize_superblock_test 00:11:09.929 ************************************ 00:11:09.929 10:39:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:11:09.929 00:11:09.929 real 0m4.643s 00:11:09.929 user 0m4.961s 00:11:09.929 sys 0m0.678s 00:11:09.929 10:39:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.929 10:39:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.929 
10:39:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:11:09.929 10:39:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:11:09.929 10:39:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:11:09.929 10:39:31 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:11:09.929 10:39:31 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:11:09.929 10:39:31 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:11:09.929 10:39:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:09.929 10:39:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.929 10:39:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.929 ************************************ 00:11:09.929 START TEST raid_function_test_raid0 00:11:09.929 ************************************ 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:09.929 Process raid pid: 60447 00:11:09.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60447 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60447' 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60447 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60447 ']' 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:09.929 10:39:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:10.187 [2024-10-30 10:39:31.437287] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:11:10.187 [2024-10-30 10:39:31.437962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.187 [2024-10-30 10:39:31.623998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.459 [2024-10-30 10:39:31.760246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.728 [2024-10-30 10:39:31.973503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.728 [2024-10-30 10:39:31.973693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.986 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.986 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:11:10.986 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:10.986 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.986 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:11.244 Base_1 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:11.244 Base_2 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:11.244 [2024-10-30 10:39:32.511152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:11.244 [2024-10-30 10:39:32.513706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:11.244 [2024-10-30 10:39:32.513816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:11.244 [2024-10-30 10:39:32.513843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:11.244 [2024-10-30 10:39:32.514199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:11.244 [2024-10-30 10:39:32.514390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:11.244 [2024-10-30 10:39:32.514406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:11:11.244 [2024-10-30 10:39:32.514597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:11.244 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:11.245 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.245 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:11.245 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.245 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:11:11.245 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.245 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:11.245 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:11:11.502 [2024-10-30 10:39:32.819272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:11.502 /dev/nbd0 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:11.502 
10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.502 1+0 records in 00:11:11.502 1+0 records out 00:11:11.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051488 s, 8.0 MB/s 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:11.502 10:39:32 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:11.760 { 00:11:11.760 "nbd_device": "/dev/nbd0", 00:11:11.760 "bdev_name": "raid" 00:11:11.760 } 00:11:11.760 ]' 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:11.760 { 00:11:11.760 "nbd_device": "/dev/nbd0", 00:11:11.760 "bdev_name": "raid" 00:11:11.760 } 00:11:11.760 ]' 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:11:11.760 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:12.018 4096+0 records in 00:11:12.018 4096+0 records out 00:11:12.018 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0395863 s, 53.0 MB/s 00:11:12.018 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:12.276 4096+0 records in 00:11:12.276 4096+0 records out 00:11:12.276 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.334531 s, 6.3 MB/s 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:12.276 128+0 records in 00:11:12.276 128+0 records out 00:11:12.276 65536 bytes (66 kB, 64 KiB) copied, 0.000760455 s, 86.2 MB/s 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:12.276 2035+0 records in 00:11:12.276 2035+0 records out 00:11:12.276 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0107295 s, 97.1 MB/s 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:12.276 10:39:33 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:12.276 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:12.277 456+0 records in 00:11:12.277 456+0 records out 00:11:12.277 233472 bytes (233 kB, 228 KiB) copied, 0.00274551 s, 85.0 MB/s 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:12.277 10:39:33 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.277 10:39:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:12.844 [2024-10-30 10:39:34.008104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:12.844 10:39:34 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60447 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60447 ']' 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60447 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60447 00:11:13.103 killing process with pid 60447 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60447' 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 
60447 00:11:13.103 [2024-10-30 10:39:34.381223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.103 10:39:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60447 00:11:13.103 [2024-10-30 10:39:34.381362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.103 [2024-10-30 10:39:34.381442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.103 [2024-10-30 10:39:34.381467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:11:13.103 [2024-10-30 10:39:34.570455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.481 ************************************ 00:11:14.481 END TEST raid_function_test_raid0 00:11:14.481 ************************************ 00:11:14.481 10:39:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:11:14.481 00:11:14.481 real 0m4.269s 00:11:14.481 user 0m5.230s 00:11:14.481 sys 0m1.018s 00:11:14.481 10:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.481 10:39:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:14.481 10:39:35 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:11:14.481 10:39:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:14.481 10:39:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.481 10:39:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.481 ************************************ 00:11:14.481 START TEST raid_function_test_concat 00:11:14.481 ************************************ 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60576 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60576' 00:11:14.481 Process raid pid: 60576 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60576 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60576 ']' 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:14.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:14.481 10:39:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:14.481 [2024-10-30 10:39:35.730602] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:11:14.481 [2024-10-30 10:39:35.730758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.482 [2024-10-30 10:39:35.912394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.740 [2024-10-30 10:39:36.070070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.999 [2024-10-30 10:39:36.295104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.999 [2024-10-30 10:39:36.295183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:15.567 Base_1 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:15.567 Base_2 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:15.567 [2024-10-30 10:39:36.827575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:15.567 [2024-10-30 10:39:36.830090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:15.567 [2024-10-30 10:39:36.830202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:15.567 [2024-10-30 10:39:36.830224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:15.567 [2024-10-30 10:39:36.830581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:15.567 [2024-10-30 10:39:36.830785] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:15.567 [2024-10-30 10:39:36.830801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:11:15.567 [2024-10-30 10:39:36.831031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.567 10:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.568 10:39:36 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:15.568 10:39:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:11:15.826 [2024-10-30 10:39:37.135687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:15.826 /dev/nbd0 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.826 1+0 records in 00:11:15.826 1+0 records out 00:11:15.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312886 s, 13.1 MB/s 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:11:15.826 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:16.085 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:16.085 { 00:11:16.085 "nbd_device": "/dev/nbd0", 00:11:16.085 "bdev_name": "raid" 00:11:16.085 } 00:11:16.085 ]' 00:11:16.085 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:16.085 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:16.085 { 00:11:16.085 "nbd_device": "/dev/nbd0", 00:11:16.085 "bdev_name": "raid" 00:11:16.085 } 00:11:16.085 ]' 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:16.344 4096+0 records in 00:11:16.344 4096+0 records out 00:11:16.344 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0414303 s, 50.6 MB/s 00:11:16.344 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:16.604 4096+0 records in 00:11:16.604 4096+0 records out 00:11:16.604 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.342024 s, 6.1 MB/s 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:16.604 128+0 records in 00:11:16.604 128+0 records out 00:11:16.604 65536 bytes (66 kB, 64 KiB) copied, 0.00107752 s, 60.8 MB/s 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:16.604 10:39:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:16.604 2035+0 records in 00:11:16.604 2035+0 records out 00:11:16.604 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00939875 s, 111 MB/s 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:16.604 10:39:38 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:16.604 456+0 records in 00:11:16.604 456+0 records out 00:11:16.604 233472 bytes (233 kB, 228 KiB) copied, 0.00208782 s, 112 MB/s 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:16.604 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:16.605 
10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:16.605 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:11:16.605 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.605 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:17.259 [2024-10-30 10:39:38.388104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:17.259 10:39:38 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60576 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60576 ']' 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60576 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:17.259 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60576 00:11:17.517 killing process with pid 60576 00:11:17.517 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:17.517 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:17.517 10:39:38 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60576' 00:11:17.517 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60576 00:11:17.517 [2024-10-30 10:39:38.740749] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.517 10:39:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60576 00:11:17.517 [2024-10-30 10:39:38.740878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.517 [2024-10-30 10:39:38.740951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.517 [2024-10-30 10:39:38.740989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:11:17.517 [2024-10-30 10:39:38.927369] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.888 10:39:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:11:18.888 00:11:18.888 real 0m4.310s 00:11:18.888 user 0m5.311s 00:11:18.888 sys 0m0.999s 00:11:18.888 10:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.888 10:39:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:18.888 ************************************ 00:11:18.888 END TEST raid_function_test_concat 00:11:18.888 ************************************ 00:11:18.888 10:39:39 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:11:18.888 10:39:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:18.888 10:39:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.888 10:39:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.888 ************************************ 00:11:18.888 START TEST raid0_resize_test 00:11:18.888 ************************************ 00:11:18.888 10:39:39 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:11:18.889 Process raid pid: 60704 00:11:18.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.889 10:39:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:11:18.889 10:39:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:18.889 10:39:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60704 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60704' 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60704 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60704 ']' 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:18.889 10:39:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.889 [2024-10-30 10:39:40.118142] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:11:18.889 [2024-10-30 10:39:40.118580] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.889 [2024-10-30 10:39:40.303908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.151 [2024-10-30 10:39:40.436697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.409 [2024-10-30 10:39:40.643702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.409 [2024-10-30 10:39:40.643962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.667 Base_1 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.667 10:39:41 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.925 Base_2 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.925 [2024-10-30 10:39:41.149801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:19.925 [2024-10-30 10:39:41.152196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:19.925 [2024-10-30 10:39:41.153286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:19.925 [2024-10-30 10:39:41.153360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:19.925 [2024-10-30 10:39:41.154065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:19.925 [2024-10-30 10:39:41.154401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:19.925 [2024-10-30 10:39:41.154445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:19.925 [2024-10-30 10:39:41.154915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.925 [2024-10-30 10:39:41.158807] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:19.925 [2024-10-30 10:39:41.158874] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:19.925 true 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.925 [2024-10-30 10:39:41.170856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.925 [2024-10-30 10:39:41.222647] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:11:19.925 [2024-10-30 10:39:41.222676] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:19.925 [2024-10-30 10:39:41.222715] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:11:19.925 true 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.925 [2024-10-30 10:39:41.234863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60704 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60704 ']' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60704 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:11:19.925 10:39:41 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60704 00:11:19.925 killing process with pid 60704 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60704' 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60704 00:11:19.925 [2024-10-30 10:39:41.309773] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.925 10:39:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60704 00:11:19.925 [2024-10-30 10:39:41.309872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.925 [2024-10-30 10:39:41.309937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.925 [2024-10-30 10:39:41.309951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:19.925 [2024-10-30 10:39:41.325298] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.298 ************************************ 00:11:21.298 END TEST raid0_resize_test 00:11:21.298 ************************************ 00:11:21.298 10:39:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:21.298 00:11:21.298 real 0m2.338s 00:11:21.298 user 0m2.611s 00:11:21.298 sys 0m0.392s 00:11:21.298 10:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:21.298 10:39:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.298 10:39:42 bdev_raid -- bdev/bdev_raid.sh@964 
-- # run_test raid1_resize_test raid_resize_test 1 00:11:21.298 10:39:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:21.298 10:39:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:21.298 10:39:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.298 ************************************ 00:11:21.298 START TEST raid1_resize_test 00:11:21.298 ************************************ 00:11:21.298 10:39:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:21.299 Process raid pid: 60760 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60760 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60760' 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60760 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60760 ']' 00:11:21.299 10:39:42 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:21.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:21.299 10:39:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.299 [2024-10-30 10:39:42.482039] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:11:21.299 [2024-10-30 10:39:42.482205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.299 [2024-10-30 10:39:42.663815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.557 [2024-10-30 10:39:42.816234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.815 [2024-10-30 10:39:43.038480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.815 [2024-10-30 10:39:43.038731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.073 10:39:43 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.073 Base_1 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.073 Base_2 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.073 [2024-10-30 10:39:43.519967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:22.073 [2024-10-30 10:39:43.522404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:22.073 [2024-10-30 10:39:43.522632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:22.073 [2024-10-30 10:39:43.522659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:22.073 [2024-10-30 10:39:43.523044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:22.073 [2024-10-30 10:39:43.523232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:22.073 [2024-10-30 10:39:43.523249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:22.073 [2024-10-30 
10:39:43.523453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.073 [2024-10-30 10:39:43.527947] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:22.073 [2024-10-30 10:39:43.528014] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:22.073 true 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:22.073 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:22.074 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.074 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.074 [2024-10-30 10:39:43.540189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 
32 '!=' 32 ']' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.331 [2024-10-30 10:39:43.591996] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:22.331 [2024-10-30 10:39:43.592043] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:22.331 [2024-10-30 10:39:43.592091] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:11:22.331 true 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.331 [2024-10-30 10:39:43.604208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 
']' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60760 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60760 ']' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60760 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60760 00:11:22.331 killing process with pid 60760 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60760' 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60760 00:11:22.331 [2024-10-30 10:39:43.676949] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.331 [2024-10-30 10:39:43.677068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.331 10:39:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60760 00:11:22.331 [2024-10-30 10:39:43.677650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.331 [2024-10-30 10:39:43.677681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:22.331 [2024-10-30 10:39:43.692407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.267 ************************************ 00:11:23.267 END TEST raid1_resize_test 00:11:23.267 ************************************ 
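The two resize tests above both pass the same size check: raid0 stripes its base bdevs, so the raid's block count is the sum, while raid1 mirrors, so it is a single copy. A minimal sketch of that arithmetic (illustrative only — `expected_raid_size_mb` is a hypothetical helper, not a function in bdev_raid.sh, though it mirrors the `blkcnt`/`expected_size` computation visible in the xtrace output):

```shell
#!/bin/sh
# Sketch of how raid_resize_test's expected size follows from the base bdev
# geometry seen in this log: 32 MiB null bdevs with 512-byte blocks.
# raid0 capacity = sum of base bdevs; raid1 capacity = one mirror copy.
expected_raid_size_mb() {
    raid_level=$1     # 0 or 1
    bdev_size_mb=$2   # per-base-bdev size in MiB
    num_base=$3
    if [ "$raid_level" -eq 0 ]; then
        echo $(( bdev_size_mb * num_base ))
    else
        echo "$bdev_size_mb"
    fi
}

expected_raid_size_mb 0 32 2   # 64  (matches blkcnt=131072 at 512 B/block)
expected_raid_size_mb 1 32 2   # 32  (matches blkcnt=65536  at 512 B/block)
expected_raid_size_mb 0 64 2   # 128 (after both bases resize to 64 MiB)
```

These values line up with the `blkcnt=131072`/`expected_size=64` and `blkcnt=65536`/`expected_size=32` comparisons in the xtrace above, which is why both `'!='` checks fall through without error.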
00:11:23.267 10:39:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:23.267 00:11:23.267 real 0m2.320s 00:11:23.267 user 0m2.613s 00:11:23.267 sys 0m0.352s 00:11:23.267 10:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.267 10:39:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.528 10:39:44 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:23.528 10:39:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:23.528 10:39:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:23.528 10:39:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:23.528 10:39:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.528 10:39:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.528 ************************************ 00:11:23.528 START TEST raid_state_function_test 00:11:23.528 ************************************ 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:23.528 10:39:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:23.528 Process raid pid: 60828 00:11:23.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60828 00:11:23.528 10:39:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60828' 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60828 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60828 ']' 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.528 10:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.529 [2024-10-30 10:39:44.869101] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:11:23.529 [2024-10-30 10:39:44.869503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.786 [2024-10-30 10:39:45.059251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.786 [2024-10-30 10:39:45.213119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.044 [2024-10-30 10:39:45.417622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.044 [2024-10-30 10:39:45.417670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.611 [2024-10-30 10:39:45.913704] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.611 [2024-10-30 10:39:45.913769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.611 [2024-10-30 10:39:45.913787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.611 [2024-10-30 10:39:45.913803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.611 10:39:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.611 "name": "Existed_Raid", 00:11:24.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.611 "strip_size_kb": 64, 00:11:24.611 "state": "configuring", 00:11:24.611 
"raid_level": "raid0", 00:11:24.611 "superblock": false, 00:11:24.611 "num_base_bdevs": 2, 00:11:24.611 "num_base_bdevs_discovered": 0, 00:11:24.611 "num_base_bdevs_operational": 2, 00:11:24.611 "base_bdevs_list": [ 00:11:24.611 { 00:11:24.611 "name": "BaseBdev1", 00:11:24.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.611 "is_configured": false, 00:11:24.611 "data_offset": 0, 00:11:24.611 "data_size": 0 00:11:24.611 }, 00:11:24.611 { 00:11:24.611 "name": "BaseBdev2", 00:11:24.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.611 "is_configured": false, 00:11:24.611 "data_offset": 0, 00:11:24.611 "data_size": 0 00:11:24.611 } 00:11:24.611 ] 00:11:24.611 }' 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.611 10:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.206 [2024-10-30 10:39:46.433795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.206 [2024-10-30 10:39:46.433836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:25.206 [2024-10-30 10:39:46.441767] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.206 [2024-10-30 10:39:46.441821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.206 [2024-10-30 10:39:46.441835] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.206 [2024-10-30 10:39:46.441853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.206 [2024-10-30 10:39:46.485827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.206 BaseBdev1 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.206 [ 00:11:25.206 { 00:11:25.206 "name": "BaseBdev1", 00:11:25.206 "aliases": [ 00:11:25.206 "4262f6ad-91d8-42bc-8b80-ea405d14f1cb" 00:11:25.206 ], 00:11:25.206 "product_name": "Malloc disk", 00:11:25.206 "block_size": 512, 00:11:25.206 "num_blocks": 65536, 00:11:25.206 "uuid": "4262f6ad-91d8-42bc-8b80-ea405d14f1cb", 00:11:25.206 "assigned_rate_limits": { 00:11:25.206 "rw_ios_per_sec": 0, 00:11:25.206 "rw_mbytes_per_sec": 0, 00:11:25.206 "r_mbytes_per_sec": 0, 00:11:25.206 "w_mbytes_per_sec": 0 00:11:25.206 }, 00:11:25.206 "claimed": true, 00:11:25.206 "claim_type": "exclusive_write", 00:11:25.206 "zoned": false, 00:11:25.206 "supported_io_types": { 00:11:25.206 "read": true, 00:11:25.206 "write": true, 00:11:25.206 "unmap": true, 00:11:25.206 "flush": true, 00:11:25.206 "reset": true, 00:11:25.206 "nvme_admin": false, 00:11:25.206 "nvme_io": false, 00:11:25.206 "nvme_io_md": false, 00:11:25.206 "write_zeroes": true, 00:11:25.206 "zcopy": true, 00:11:25.206 "get_zone_info": false, 00:11:25.206 "zone_management": false, 00:11:25.206 "zone_append": false, 00:11:25.206 "compare": false, 00:11:25.206 "compare_and_write": false, 00:11:25.206 "abort": true, 00:11:25.206 "seek_hole": false, 00:11:25.206 "seek_data": false, 00:11:25.206 "copy": true, 00:11:25.206 "nvme_iov_md": 
false 00:11:25.206 }, 00:11:25.206 "memory_domains": [ 00:11:25.206 { 00:11:25.206 "dma_device_id": "system", 00:11:25.206 "dma_device_type": 1 00:11:25.206 }, 00:11:25.206 { 00:11:25.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.206 "dma_device_type": 2 00:11:25.206 } 00:11:25.206 ], 00:11:25.206 "driver_specific": {} 00:11:25.206 } 00:11:25.206 ] 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.206 
10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.206 "name": "Existed_Raid", 00:11:25.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.206 "strip_size_kb": 64, 00:11:25.206 "state": "configuring", 00:11:25.206 "raid_level": "raid0", 00:11:25.206 "superblock": false, 00:11:25.206 "num_base_bdevs": 2, 00:11:25.206 "num_base_bdevs_discovered": 1, 00:11:25.206 "num_base_bdevs_operational": 2, 00:11:25.206 "base_bdevs_list": [ 00:11:25.206 { 00:11:25.206 "name": "BaseBdev1", 00:11:25.206 "uuid": "4262f6ad-91d8-42bc-8b80-ea405d14f1cb", 00:11:25.206 "is_configured": true, 00:11:25.206 "data_offset": 0, 00:11:25.206 "data_size": 65536 00:11:25.206 }, 00:11:25.206 { 00:11:25.206 "name": "BaseBdev2", 00:11:25.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.206 "is_configured": false, 00:11:25.206 "data_offset": 0, 00:11:25.206 "data_size": 0 00:11:25.206 } 00:11:25.206 ] 00:11:25.206 }' 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.206 10:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.791 [2024-10-30 10:39:47.022011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:25.791 [2024-10-30 10:39:47.022072] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.791 [2024-10-30 10:39:47.030050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.791 [2024-10-30 10:39:47.032411] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.791 [2024-10-30 10:39:47.032466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.791 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.791 "name": "Existed_Raid", 00:11:25.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.791 "strip_size_kb": 64, 00:11:25.791 "state": "configuring", 00:11:25.791 "raid_level": "raid0", 00:11:25.791 "superblock": false, 00:11:25.791 "num_base_bdevs": 2, 00:11:25.791 "num_base_bdevs_discovered": 1, 00:11:25.791 "num_base_bdevs_operational": 2, 00:11:25.791 "base_bdevs_list": [ 00:11:25.791 { 00:11:25.791 "name": "BaseBdev1", 00:11:25.791 "uuid": "4262f6ad-91d8-42bc-8b80-ea405d14f1cb", 00:11:25.791 "is_configured": true, 00:11:25.791 "data_offset": 0, 00:11:25.791 "data_size": 65536 00:11:25.791 }, 00:11:25.791 { 00:11:25.792 "name": "BaseBdev2", 00:11:25.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.792 "is_configured": false, 00:11:25.792 "data_offset": 0, 00:11:25.792 "data_size": 0 00:11:25.792 } 00:11:25.792 
] 00:11:25.792 }' 00:11:25.792 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.792 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 [2024-10-30 10:39:47.577144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.359 [2024-10-30 10:39:47.577204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.359 [2024-10-30 10:39:47.577218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:26.359 [2024-10-30 10:39:47.577552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:26.359 [2024-10-30 10:39:47.577757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.359 [2024-10-30 10:39:47.577781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:26.359 [2024-10-30 10:39:47.578152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.359 BaseBdev2 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:26.359 10:39:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 [ 00:11:26.359 { 00:11:26.359 "name": "BaseBdev2", 00:11:26.359 "aliases": [ 00:11:26.359 "3759a97b-6a5f-4ef4-a163-637c7bf33560" 00:11:26.359 ], 00:11:26.359 "product_name": "Malloc disk", 00:11:26.359 "block_size": 512, 00:11:26.359 "num_blocks": 65536, 00:11:26.359 "uuid": "3759a97b-6a5f-4ef4-a163-637c7bf33560", 00:11:26.359 "assigned_rate_limits": { 00:11:26.359 "rw_ios_per_sec": 0, 00:11:26.359 "rw_mbytes_per_sec": 0, 00:11:26.359 "r_mbytes_per_sec": 0, 00:11:26.359 "w_mbytes_per_sec": 0 00:11:26.359 }, 00:11:26.359 "claimed": true, 00:11:26.359 "claim_type": "exclusive_write", 00:11:26.359 "zoned": false, 00:11:26.359 "supported_io_types": { 00:11:26.359 "read": true, 00:11:26.359 "write": true, 00:11:26.359 "unmap": true, 00:11:26.359 "flush": true, 00:11:26.359 "reset": true, 00:11:26.359 "nvme_admin": false, 00:11:26.359 "nvme_io": false, 00:11:26.359 "nvme_io_md": 
false, 00:11:26.359 "write_zeroes": true, 00:11:26.359 "zcopy": true, 00:11:26.359 "get_zone_info": false, 00:11:26.359 "zone_management": false, 00:11:26.359 "zone_append": false, 00:11:26.359 "compare": false, 00:11:26.359 "compare_and_write": false, 00:11:26.359 "abort": true, 00:11:26.359 "seek_hole": false, 00:11:26.359 "seek_data": false, 00:11:26.359 "copy": true, 00:11:26.359 "nvme_iov_md": false 00:11:26.359 }, 00:11:26.359 "memory_domains": [ 00:11:26.359 { 00:11:26.359 "dma_device_id": "system", 00:11:26.359 "dma_device_type": 1 00:11:26.359 }, 00:11:26.359 { 00:11:26.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.359 "dma_device_type": 2 00:11:26.359 } 00:11:26.359 ], 00:11:26.359 "driver_specific": {} 00:11:26.359 } 00:11:26.359 ] 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.359 "name": "Existed_Raid", 00:11:26.359 "uuid": "ce521248-7d5d-415a-9759-4a5f0b2d35ec", 00:11:26.359 "strip_size_kb": 64, 00:11:26.359 "state": "online", 00:11:26.359 "raid_level": "raid0", 00:11:26.359 "superblock": false, 00:11:26.359 "num_base_bdevs": 2, 00:11:26.359 "num_base_bdevs_discovered": 2, 00:11:26.359 "num_base_bdevs_operational": 2, 00:11:26.359 "base_bdevs_list": [ 00:11:26.359 { 00:11:26.359 "name": "BaseBdev1", 00:11:26.359 "uuid": "4262f6ad-91d8-42bc-8b80-ea405d14f1cb", 00:11:26.359 "is_configured": true, 00:11:26.359 "data_offset": 0, 00:11:26.359 "data_size": 65536 00:11:26.359 }, 00:11:26.359 { 00:11:26.359 "name": "BaseBdev2", 00:11:26.359 "uuid": "3759a97b-6a5f-4ef4-a163-637c7bf33560", 00:11:26.359 "is_configured": true, 00:11:26.359 "data_offset": 0, 00:11:26.359 "data_size": 65536 00:11:26.359 } 00:11:26.359 ] 00:11:26.359 }' 00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:26.359 10:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.926 [2024-10-30 10:39:48.101666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.926 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.926 "name": "Existed_Raid", 00:11:26.926 "aliases": [ 00:11:26.926 "ce521248-7d5d-415a-9759-4a5f0b2d35ec" 00:11:26.926 ], 00:11:26.926 "product_name": "Raid Volume", 00:11:26.926 "block_size": 512, 00:11:26.926 "num_blocks": 131072, 00:11:26.926 "uuid": "ce521248-7d5d-415a-9759-4a5f0b2d35ec", 00:11:26.926 "assigned_rate_limits": { 00:11:26.926 "rw_ios_per_sec": 0, 00:11:26.926 "rw_mbytes_per_sec": 0, 00:11:26.926 "r_mbytes_per_sec": 
0, 00:11:26.926 "w_mbytes_per_sec": 0 00:11:26.926 }, 00:11:26.926 "claimed": false, 00:11:26.926 "zoned": false, 00:11:26.926 "supported_io_types": { 00:11:26.926 "read": true, 00:11:26.926 "write": true, 00:11:26.926 "unmap": true, 00:11:26.926 "flush": true, 00:11:26.926 "reset": true, 00:11:26.926 "nvme_admin": false, 00:11:26.926 "nvme_io": false, 00:11:26.926 "nvme_io_md": false, 00:11:26.926 "write_zeroes": true, 00:11:26.926 "zcopy": false, 00:11:26.926 "get_zone_info": false, 00:11:26.926 "zone_management": false, 00:11:26.926 "zone_append": false, 00:11:26.926 "compare": false, 00:11:26.926 "compare_and_write": false, 00:11:26.926 "abort": false, 00:11:26.926 "seek_hole": false, 00:11:26.926 "seek_data": false, 00:11:26.926 "copy": false, 00:11:26.926 "nvme_iov_md": false 00:11:26.926 }, 00:11:26.926 "memory_domains": [ 00:11:26.926 { 00:11:26.927 "dma_device_id": "system", 00:11:26.927 "dma_device_type": 1 00:11:26.927 }, 00:11:26.927 { 00:11:26.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.927 "dma_device_type": 2 00:11:26.927 }, 00:11:26.927 { 00:11:26.927 "dma_device_id": "system", 00:11:26.927 "dma_device_type": 1 00:11:26.927 }, 00:11:26.927 { 00:11:26.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.927 "dma_device_type": 2 00:11:26.927 } 00:11:26.927 ], 00:11:26.927 "driver_specific": { 00:11:26.927 "raid": { 00:11:26.927 "uuid": "ce521248-7d5d-415a-9759-4a5f0b2d35ec", 00:11:26.927 "strip_size_kb": 64, 00:11:26.927 "state": "online", 00:11:26.927 "raid_level": "raid0", 00:11:26.927 "superblock": false, 00:11:26.927 "num_base_bdevs": 2, 00:11:26.927 "num_base_bdevs_discovered": 2, 00:11:26.927 "num_base_bdevs_operational": 2, 00:11:26.927 "base_bdevs_list": [ 00:11:26.927 { 00:11:26.927 "name": "BaseBdev1", 00:11:26.927 "uuid": "4262f6ad-91d8-42bc-8b80-ea405d14f1cb", 00:11:26.927 "is_configured": true, 00:11:26.927 "data_offset": 0, 00:11:26.927 "data_size": 65536 00:11:26.927 }, 00:11:26.927 { 00:11:26.927 "name": "BaseBdev2", 
00:11:26.927 "uuid": "3759a97b-6a5f-4ef4-a163-637c7bf33560", 00:11:26.927 "is_configured": true, 00:11:26.927 "data_offset": 0, 00:11:26.927 "data_size": 65536 00:11:26.927 } 00:11:26.927 ] 00:11:26.927 } 00:11:26.927 } 00:11:26.927 }' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.927 BaseBdev2' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.927 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.927 [2024-10-30 10:39:48.361487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.927 [2024-10-30 10:39:48.361532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.927 [2024-10-30 10:39:48.361600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.185 "name": "Existed_Raid", 00:11:27.185 "uuid": "ce521248-7d5d-415a-9759-4a5f0b2d35ec", 00:11:27.185 "strip_size_kb": 64, 00:11:27.185 
"state": "offline", 00:11:27.185 "raid_level": "raid0", 00:11:27.185 "superblock": false, 00:11:27.185 "num_base_bdevs": 2, 00:11:27.185 "num_base_bdevs_discovered": 1, 00:11:27.185 "num_base_bdevs_operational": 1, 00:11:27.185 "base_bdevs_list": [ 00:11:27.185 { 00:11:27.185 "name": null, 00:11:27.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.185 "is_configured": false, 00:11:27.185 "data_offset": 0, 00:11:27.185 "data_size": 65536 00:11:27.185 }, 00:11:27.185 { 00:11:27.185 "name": "BaseBdev2", 00:11:27.185 "uuid": "3759a97b-6a5f-4ef4-a163-637c7bf33560", 00:11:27.185 "is_configured": true, 00:11:27.185 "data_offset": 0, 00:11:27.185 "data_size": 65536 00:11:27.185 } 00:11:27.185 ] 00:11:27.185 }' 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.185 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.753 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.753 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.753 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.753 10:39:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.753 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.753 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.753 10:39:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.753 [2024-10-30 10:39:49.012447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.753 [2024-10-30 10:39:49.012515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:27.753 10:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60828 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60828 ']' 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 60828 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60828 00:11:27.754 killing process with pid 60828 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60828' 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60828 00:11:27.754 [2024-10-30 10:39:49.179841] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.754 10:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60828 00:11:27.754 [2024-10-30 10:39:49.194434] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.131 10:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:29.131 00:11:29.131 real 0m5.456s 00:11:29.131 user 0m8.255s 00:11:29.131 sys 0m0.779s 00:11:29.131 ************************************ 00:11:29.132 END TEST raid_state_function_test 00:11:29.132 ************************************ 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.132 10:39:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:11:29.132 10:39:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:11:29.132 10:39:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.132 10:39:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.132 ************************************ 00:11:29.132 START TEST raid_state_function_test_sb 00:11:29.132 ************************************ 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:29.132 Process raid pid: 61081 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61081 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61081' 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61081 00:11:29.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61081 ']' 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:29.132 10:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.132 [2024-10-30 10:39:50.388346] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:11:29.132 [2024-10-30 10:39:50.388834] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.391 [2024-10-30 10:39:50.601384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.391 [2024-10-30 10:39:50.748527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.649 [2024-10-30 10:39:50.950983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.649 [2024-10-30 10:39:50.951274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 
BaseBdev2'\''' -n Existed_Raid 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.217 [2024-10-30 10:39:51.391346] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.217 [2024-10-30 10:39:51.391408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.217 [2024-10-30 10:39:51.391424] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.217 [2024-10-30 10:39:51.391440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.217 10:39:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.217 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.217 "name": "Existed_Raid", 00:11:30.217 "uuid": "e5ccce5f-2d5b-44d7-86de-66b0c3fe3293", 00:11:30.217 "strip_size_kb": 64, 00:11:30.217 "state": "configuring", 00:11:30.217 "raid_level": "raid0", 00:11:30.217 "superblock": true, 00:11:30.217 "num_base_bdevs": 2, 00:11:30.217 "num_base_bdevs_discovered": 0, 00:11:30.217 "num_base_bdevs_operational": 2, 00:11:30.217 "base_bdevs_list": [ 00:11:30.217 { 00:11:30.218 "name": "BaseBdev1", 00:11:30.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.218 "is_configured": false, 00:11:30.218 "data_offset": 0, 00:11:30.218 "data_size": 0 00:11:30.218 }, 00:11:30.218 { 00:11:30.218 "name": "BaseBdev2", 00:11:30.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.218 "is_configured": false, 00:11:30.218 "data_offset": 0, 00:11:30.218 "data_size": 0 00:11:30.218 } 00:11:30.218 ] 00:11:30.218 }' 00:11:30.218 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.218 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.482 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.482 
10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.482 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.755 [2024-10-30 10:39:51.955411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.755 [2024-10-30 10:39:51.955454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:30.755 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.755 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:30.755 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.755 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.755 [2024-10-30 10:39:51.963396] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.755 [2024-10-30 10:39:51.963449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.755 [2024-10-30 10:39:51.963464] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.755 [2024-10-30 10:39:51.963482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.755 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.755 10:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:30.755 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.755 10:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.755 
[2024-10-30 10:39:52.007867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.755 BaseBdev1 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.755 [ 00:11:30.755 { 00:11:30.755 "name": "BaseBdev1", 00:11:30.755 "aliases": [ 00:11:30.755 "eed1eb55-7545-4bb7-8b2f-b2d22624fd80" 00:11:30.755 ], 00:11:30.755 "product_name": "Malloc disk", 00:11:30.755 "block_size": 512, 00:11:30.755 
"num_blocks": 65536, 00:11:30.755 "uuid": "eed1eb55-7545-4bb7-8b2f-b2d22624fd80", 00:11:30.755 "assigned_rate_limits": { 00:11:30.755 "rw_ios_per_sec": 0, 00:11:30.755 "rw_mbytes_per_sec": 0, 00:11:30.755 "r_mbytes_per_sec": 0, 00:11:30.755 "w_mbytes_per_sec": 0 00:11:30.755 }, 00:11:30.755 "claimed": true, 00:11:30.755 "claim_type": "exclusive_write", 00:11:30.755 "zoned": false, 00:11:30.755 "supported_io_types": { 00:11:30.755 "read": true, 00:11:30.755 "write": true, 00:11:30.755 "unmap": true, 00:11:30.755 "flush": true, 00:11:30.755 "reset": true, 00:11:30.755 "nvme_admin": false, 00:11:30.755 "nvme_io": false, 00:11:30.755 "nvme_io_md": false, 00:11:30.755 "write_zeroes": true, 00:11:30.755 "zcopy": true, 00:11:30.755 "get_zone_info": false, 00:11:30.755 "zone_management": false, 00:11:30.755 "zone_append": false, 00:11:30.755 "compare": false, 00:11:30.755 "compare_and_write": false, 00:11:30.755 "abort": true, 00:11:30.755 "seek_hole": false, 00:11:30.755 "seek_data": false, 00:11:30.755 "copy": true, 00:11:30.755 "nvme_iov_md": false 00:11:30.755 }, 00:11:30.755 "memory_domains": [ 00:11:30.755 { 00:11:30.755 "dma_device_id": "system", 00:11:30.755 "dma_device_type": 1 00:11:30.755 }, 00:11:30.755 { 00:11:30.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.755 "dma_device_type": 2 00:11:30.755 } 00:11:30.755 ], 00:11:30.755 "driver_specific": {} 00:11:30.755 } 00:11:30.755 ] 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.755 "name": "Existed_Raid", 00:11:30.755 "uuid": "27653f63-d92f-4912-80d8-5ef84c7166b5", 00:11:30.755 "strip_size_kb": 64, 00:11:30.755 "state": "configuring", 00:11:30.755 "raid_level": "raid0", 00:11:30.755 "superblock": true, 00:11:30.755 "num_base_bdevs": 2, 00:11:30.755 "num_base_bdevs_discovered": 1, 00:11:30.755 "num_base_bdevs_operational": 2, 00:11:30.755 "base_bdevs_list": [ 00:11:30.755 { 00:11:30.755 "name": "BaseBdev1", 00:11:30.755 "uuid": 
"eed1eb55-7545-4bb7-8b2f-b2d22624fd80", 00:11:30.755 "is_configured": true, 00:11:30.755 "data_offset": 2048, 00:11:30.755 "data_size": 63488 00:11:30.755 }, 00:11:30.755 { 00:11:30.755 "name": "BaseBdev2", 00:11:30.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.755 "is_configured": false, 00:11:30.755 "data_offset": 0, 00:11:30.755 "data_size": 0 00:11:30.755 } 00:11:30.755 ] 00:11:30.755 }' 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.755 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.323 [2024-10-30 10:39:52.544065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.323 [2024-10-30 10:39:52.544129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.323 [2024-10-30 10:39:52.552150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.323 [2024-10-30 10:39:52.554618] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:11:31.323 [2024-10-30 10:39:52.554813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.323 "name": "Existed_Raid", 00:11:31.323 "uuid": "8087d5f4-65ce-4103-8761-1813e7e3c412", 00:11:31.323 "strip_size_kb": 64, 00:11:31.323 "state": "configuring", 00:11:31.323 "raid_level": "raid0", 00:11:31.323 "superblock": true, 00:11:31.323 "num_base_bdevs": 2, 00:11:31.323 "num_base_bdevs_discovered": 1, 00:11:31.323 "num_base_bdevs_operational": 2, 00:11:31.323 "base_bdevs_list": [ 00:11:31.323 { 00:11:31.323 "name": "BaseBdev1", 00:11:31.323 "uuid": "eed1eb55-7545-4bb7-8b2f-b2d22624fd80", 00:11:31.323 "is_configured": true, 00:11:31.323 "data_offset": 2048, 00:11:31.323 "data_size": 63488 00:11:31.323 }, 00:11:31.323 { 00:11:31.323 "name": "BaseBdev2", 00:11:31.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.323 "is_configured": false, 00:11:31.323 "data_offset": 0, 00:11:31.323 "data_size": 0 00:11:31.323 } 00:11:31.323 ] 00:11:31.323 }' 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.323 10:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.892 [2024-10-30 10:39:53.122270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.892 [2024-10-30 10:39:53.122567] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:31.892 [2024-10-30 10:39:53.122587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:31.892 BaseBdev2 00:11:31.892 [2024-10-30 10:39:53.122915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:31.892 [2024-10-30 10:39:53.123131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:31.892 [2024-10-30 10:39:53.123159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:31.892 [2024-10-30 10:39:53.123348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.892 [ 00:11:31.892 { 00:11:31.892 "name": "BaseBdev2", 00:11:31.892 "aliases": [ 00:11:31.892 "f0161177-f298-4b70-98b7-2fe435d1867d" 00:11:31.892 ], 00:11:31.892 "product_name": "Malloc disk", 00:11:31.892 "block_size": 512, 00:11:31.892 "num_blocks": 65536, 00:11:31.892 "uuid": "f0161177-f298-4b70-98b7-2fe435d1867d", 00:11:31.892 "assigned_rate_limits": { 00:11:31.892 "rw_ios_per_sec": 0, 00:11:31.892 "rw_mbytes_per_sec": 0, 00:11:31.892 "r_mbytes_per_sec": 0, 00:11:31.892 "w_mbytes_per_sec": 0 00:11:31.892 }, 00:11:31.892 "claimed": true, 00:11:31.892 "claim_type": "exclusive_write", 00:11:31.892 "zoned": false, 00:11:31.892 "supported_io_types": { 00:11:31.892 "read": true, 00:11:31.892 "write": true, 00:11:31.892 "unmap": true, 00:11:31.892 "flush": true, 00:11:31.892 "reset": true, 00:11:31.892 "nvme_admin": false, 00:11:31.892 "nvme_io": false, 00:11:31.892 "nvme_io_md": false, 00:11:31.892 "write_zeroes": true, 00:11:31.892 "zcopy": true, 00:11:31.892 "get_zone_info": false, 00:11:31.892 "zone_management": false, 00:11:31.892 "zone_append": false, 00:11:31.892 "compare": false, 00:11:31.892 "compare_and_write": false, 00:11:31.892 "abort": true, 00:11:31.892 "seek_hole": false, 00:11:31.892 "seek_data": false, 00:11:31.892 "copy": true, 00:11:31.892 "nvme_iov_md": false 00:11:31.892 }, 00:11:31.892 "memory_domains": [ 00:11:31.892 { 00:11:31.892 "dma_device_id": "system", 00:11:31.892 "dma_device_type": 1 00:11:31.892 }, 00:11:31.892 { 00:11:31.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.892 "dma_device_type": 2 00:11:31.892 } 00:11:31.892 ], 00:11:31.892 "driver_specific": 
{} 00:11:31.892 } 00:11:31.892 ] 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.892 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.893 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.893 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:31.893 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.893 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.893 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.893 "name": "Existed_Raid", 00:11:31.893 "uuid": "8087d5f4-65ce-4103-8761-1813e7e3c412", 00:11:31.893 "strip_size_kb": 64, 00:11:31.893 "state": "online", 00:11:31.893 "raid_level": "raid0", 00:11:31.893 "superblock": true, 00:11:31.893 "num_base_bdevs": 2, 00:11:31.893 "num_base_bdevs_discovered": 2, 00:11:31.893 "num_base_bdevs_operational": 2, 00:11:31.893 "base_bdevs_list": [ 00:11:31.893 { 00:11:31.893 "name": "BaseBdev1", 00:11:31.893 "uuid": "eed1eb55-7545-4bb7-8b2f-b2d22624fd80", 00:11:31.893 "is_configured": true, 00:11:31.893 "data_offset": 2048, 00:11:31.893 "data_size": 63488 00:11:31.893 }, 00:11:31.893 { 00:11:31.893 "name": "BaseBdev2", 00:11:31.893 "uuid": "f0161177-f298-4b70-98b7-2fe435d1867d", 00:11:31.893 "is_configured": true, 00:11:31.893 "data_offset": 2048, 00:11:31.893 "data_size": 63488 00:11:31.893 } 00:11:31.893 ] 00:11:31.893 }' 00:11:31.893 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.893 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.459 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.460 [2024-10-30 10:39:53.654846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.460 "name": "Existed_Raid", 00:11:32.460 "aliases": [ 00:11:32.460 "8087d5f4-65ce-4103-8761-1813e7e3c412" 00:11:32.460 ], 00:11:32.460 "product_name": "Raid Volume", 00:11:32.460 "block_size": 512, 00:11:32.460 "num_blocks": 126976, 00:11:32.460 "uuid": "8087d5f4-65ce-4103-8761-1813e7e3c412", 00:11:32.460 "assigned_rate_limits": { 00:11:32.460 "rw_ios_per_sec": 0, 00:11:32.460 "rw_mbytes_per_sec": 0, 00:11:32.460 "r_mbytes_per_sec": 0, 00:11:32.460 "w_mbytes_per_sec": 0 00:11:32.460 }, 00:11:32.460 "claimed": false, 00:11:32.460 "zoned": false, 00:11:32.460 "supported_io_types": { 00:11:32.460 "read": true, 00:11:32.460 "write": true, 00:11:32.460 "unmap": true, 00:11:32.460 "flush": true, 00:11:32.460 "reset": true, 00:11:32.460 "nvme_admin": false, 00:11:32.460 "nvme_io": false, 00:11:32.460 "nvme_io_md": false, 00:11:32.460 "write_zeroes": true, 00:11:32.460 "zcopy": false, 00:11:32.460 "get_zone_info": false, 00:11:32.460 "zone_management": false, 00:11:32.460 "zone_append": false, 00:11:32.460 "compare": false, 00:11:32.460 "compare_and_write": false, 
00:11:32.460 "abort": false, 00:11:32.460 "seek_hole": false, 00:11:32.460 "seek_data": false, 00:11:32.460 "copy": false, 00:11:32.460 "nvme_iov_md": false 00:11:32.460 }, 00:11:32.460 "memory_domains": [ 00:11:32.460 { 00:11:32.460 "dma_device_id": "system", 00:11:32.460 "dma_device_type": 1 00:11:32.460 }, 00:11:32.460 { 00:11:32.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.460 "dma_device_type": 2 00:11:32.460 }, 00:11:32.460 { 00:11:32.460 "dma_device_id": "system", 00:11:32.460 "dma_device_type": 1 00:11:32.460 }, 00:11:32.460 { 00:11:32.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.460 "dma_device_type": 2 00:11:32.460 } 00:11:32.460 ], 00:11:32.460 "driver_specific": { 00:11:32.460 "raid": { 00:11:32.460 "uuid": "8087d5f4-65ce-4103-8761-1813e7e3c412", 00:11:32.460 "strip_size_kb": 64, 00:11:32.460 "state": "online", 00:11:32.460 "raid_level": "raid0", 00:11:32.460 "superblock": true, 00:11:32.460 "num_base_bdevs": 2, 00:11:32.460 "num_base_bdevs_discovered": 2, 00:11:32.460 "num_base_bdevs_operational": 2, 00:11:32.460 "base_bdevs_list": [ 00:11:32.460 { 00:11:32.460 "name": "BaseBdev1", 00:11:32.460 "uuid": "eed1eb55-7545-4bb7-8b2f-b2d22624fd80", 00:11:32.460 "is_configured": true, 00:11:32.460 "data_offset": 2048, 00:11:32.460 "data_size": 63488 00:11:32.460 }, 00:11:32.460 { 00:11:32.460 "name": "BaseBdev2", 00:11:32.460 "uuid": "f0161177-f298-4b70-98b7-2fe435d1867d", 00:11:32.460 "is_configured": true, 00:11:32.460 "data_offset": 2048, 00:11:32.460 "data_size": 63488 00:11:32.460 } 00:11:32.460 ] 00:11:32.460 } 00:11:32.460 } 00:11:32.460 }' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:32.460 BaseBdev2' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.460 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.460 [2024-10-30 10:39:53.910630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.460 [2024-10-30 10:39:53.910672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.460 [2024-10-30 10:39:53.910739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.720 10:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.720 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:32.720 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:32.720 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.720 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:32.720 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:32.720 10:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.720 
10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.720 "name": "Existed_Raid", 00:11:32.720 "uuid": "8087d5f4-65ce-4103-8761-1813e7e3c412", 00:11:32.720 "strip_size_kb": 64, 00:11:32.720 "state": "offline", 00:11:32.720 "raid_level": "raid0", 00:11:32.720 "superblock": true, 00:11:32.720 "num_base_bdevs": 2, 00:11:32.720 "num_base_bdevs_discovered": 1, 00:11:32.720 "num_base_bdevs_operational": 1, 00:11:32.720 "base_bdevs_list": [ 00:11:32.720 { 00:11:32.720 "name": null, 00:11:32.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.720 "is_configured": false, 00:11:32.720 "data_offset": 0, 00:11:32.720 "data_size": 63488 00:11:32.720 }, 00:11:32.720 { 00:11:32.720 
"name": "BaseBdev2", 00:11:32.720 "uuid": "f0161177-f298-4b70-98b7-2fe435d1867d", 00:11:32.720 "is_configured": true, 00:11:32.720 "data_offset": 2048, 00:11:32.720 "data_size": 63488 00:11:32.720 } 00:11:32.720 ] 00:11:32.720 }' 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.720 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.290 [2024-10-30 10:39:54.533301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:33.290 [2024-10-30 10:39:54.533369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61081 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61081 ']' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61081 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61081 00:11:33.290 killing process with 
pid 61081 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61081' 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61081 00:11:33.290 [2024-10-30 10:39:54.700826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.290 10:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61081 00:11:33.290 [2024-10-30 10:39:54.715422] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.669 10:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.669 00:11:34.669 real 0m5.474s 00:11:34.669 user 0m8.278s 00:11:34.669 sys 0m0.758s 00:11:34.669 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.669 ************************************ 00:11:34.669 END TEST raid_state_function_test_sb 00:11:34.669 ************************************ 00:11:34.669 10:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.669 10:39:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:11:34.669 10:39:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:34.669 10:39:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.669 10:39:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.669 ************************************ 00:11:34.669 START TEST raid_superblock_test 00:11:34.669 ************************************ 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # 
raid_superblock_test raid0 2 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61334 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:34.669 10:39:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61334 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61334 ']' 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:34.669 10:39:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.669 [2024-10-30 10:39:55.895485] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:11:34.669 [2024-10-30 10:39:55.895700] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61334 ] 00:11:34.669 [2024-10-30 10:39:56.088691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.928 [2024-10-30 10:39:56.239626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.187 [2024-10-30 10:39:56.445844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.187 [2024-10-30 10:39:56.445923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:35.446 
10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.446 malloc1 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.446 [2024-10-30 10:39:56.898963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:35.446 [2024-10-30 10:39:56.899206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.446 [2024-10-30 10:39:56.899369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:35.446 [2024-10-30 10:39:56.899495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.446 [2024-10-30 10:39:56.902565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.446 [2024-10-30 10:39:56.902735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:35.446 pt1 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.446 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.706 malloc2 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.706 [2024-10-30 10:39:56.955161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.706 [2024-10-30 10:39:56.955237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.706 [2024-10-30 10:39:56.955270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:35.706 [2024-10-30 10:39:56.955285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.706 [2024-10-30 10:39:56.958107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.706 [2024-10-30 10:39:56.958152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.706 
pt2 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.706 [2024-10-30 10:39:56.967244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:35.706 [2024-10-30 10:39:56.969681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.706 [2024-10-30 10:39:56.970043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:35.706 [2024-10-30 10:39:56.970068] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:35.706 [2024-10-30 10:39:56.970382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:35.706 [2024-10-30 10:39:56.970585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:35.706 [2024-10-30 10:39:56.970607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:35.706 [2024-10-30 10:39:56.970787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.706 10:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.706 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.706 "name": "raid_bdev1", 00:11:35.706 "uuid": "0400108a-ad3e-42c0-bf37-8552a66bca5c", 00:11:35.706 "strip_size_kb": 64, 00:11:35.706 "state": "online", 00:11:35.706 "raid_level": "raid0", 00:11:35.706 "superblock": true, 00:11:35.706 "num_base_bdevs": 2, 00:11:35.706 "num_base_bdevs_discovered": 2, 00:11:35.706 "num_base_bdevs_operational": 2, 00:11:35.706 "base_bdevs_list": [ 00:11:35.706 { 00:11:35.706 "name": "pt1", 
00:11:35.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.706 "is_configured": true, 00:11:35.706 "data_offset": 2048, 00:11:35.706 "data_size": 63488 00:11:35.706 }, 00:11:35.706 { 00:11:35.706 "name": "pt2", 00:11:35.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.706 "is_configured": true, 00:11:35.706 "data_offset": 2048, 00:11:35.706 "data_size": 63488 00:11:35.706 } 00:11:35.706 ] 00:11:35.706 }' 00:11:35.706 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.706 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.275 [2024-10-30 10:39:57.471694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.275 "name": "raid_bdev1", 00:11:36.275 "aliases": [ 00:11:36.275 "0400108a-ad3e-42c0-bf37-8552a66bca5c" 00:11:36.275 ], 00:11:36.275 "product_name": "Raid Volume", 00:11:36.275 "block_size": 512, 00:11:36.275 "num_blocks": 126976, 00:11:36.275 "uuid": "0400108a-ad3e-42c0-bf37-8552a66bca5c", 00:11:36.275 "assigned_rate_limits": { 00:11:36.275 "rw_ios_per_sec": 0, 00:11:36.275 "rw_mbytes_per_sec": 0, 00:11:36.275 "r_mbytes_per_sec": 0, 00:11:36.275 "w_mbytes_per_sec": 0 00:11:36.275 }, 00:11:36.275 "claimed": false, 00:11:36.275 "zoned": false, 00:11:36.275 "supported_io_types": { 00:11:36.275 "read": true, 00:11:36.275 "write": true, 00:11:36.275 "unmap": true, 00:11:36.275 "flush": true, 00:11:36.275 "reset": true, 00:11:36.275 "nvme_admin": false, 00:11:36.275 "nvme_io": false, 00:11:36.275 "nvme_io_md": false, 00:11:36.275 "write_zeroes": true, 00:11:36.275 "zcopy": false, 00:11:36.275 "get_zone_info": false, 00:11:36.275 "zone_management": false, 00:11:36.275 "zone_append": false, 00:11:36.275 "compare": false, 00:11:36.275 "compare_and_write": false, 00:11:36.275 "abort": false, 00:11:36.275 "seek_hole": false, 00:11:36.275 "seek_data": false, 00:11:36.275 "copy": false, 00:11:36.275 "nvme_iov_md": false 00:11:36.275 }, 00:11:36.275 "memory_domains": [ 00:11:36.275 { 00:11:36.275 "dma_device_id": "system", 00:11:36.275 "dma_device_type": 1 00:11:36.275 }, 00:11:36.275 { 00:11:36.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.275 "dma_device_type": 2 00:11:36.275 }, 00:11:36.275 { 00:11:36.275 "dma_device_id": "system", 00:11:36.275 "dma_device_type": 1 00:11:36.275 }, 00:11:36.275 { 00:11:36.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.275 "dma_device_type": 2 00:11:36.275 } 00:11:36.275 ], 00:11:36.275 "driver_specific": { 00:11:36.275 "raid": { 00:11:36.275 "uuid": "0400108a-ad3e-42c0-bf37-8552a66bca5c", 00:11:36.275 "strip_size_kb": 64, 00:11:36.275 "state": "online", 00:11:36.275 
"raid_level": "raid0", 00:11:36.275 "superblock": true, 00:11:36.275 "num_base_bdevs": 2, 00:11:36.275 "num_base_bdevs_discovered": 2, 00:11:36.275 "num_base_bdevs_operational": 2, 00:11:36.275 "base_bdevs_list": [ 00:11:36.275 { 00:11:36.275 "name": "pt1", 00:11:36.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.275 "is_configured": true, 00:11:36.275 "data_offset": 2048, 00:11:36.275 "data_size": 63488 00:11:36.275 }, 00:11:36.275 { 00:11:36.275 "name": "pt2", 00:11:36.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.275 "is_configured": true, 00:11:36.275 "data_offset": 2048, 00:11:36.275 "data_size": 63488 00:11:36.275 } 00:11:36.275 ] 00:11:36.275 } 00:11:36.275 } 00:11:36.275 }' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:36.275 pt2' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.275 10:39:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.275 [2024-10-30 10:39:57.715764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.275 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0400108a-ad3e-42c0-bf37-8552a66bca5c 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
0400108a-ad3e-42c0-bf37-8552a66bca5c ']' 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.534 [2024-10-30 10:39:57.763387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.534 [2024-10-30 10:39:57.763422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.534 [2024-10-30 10:39:57.763553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.534 [2024-10-30 10:39:57.763619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.534 [2024-10-30 10:39:57.763638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.534 10:39:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create 
-z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.534 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.535 [2024-10-30 10:39:57.887468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:36.535 [2024-10-30 10:39:57.890016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:36.535 [2024-10-30 10:39:57.890113] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:36.535 [2024-10-30 10:39:57.890190] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:36.535 [2024-10-30 10:39:57.890218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.535 [2024-10-30 10:39:57.890236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:36.535 request: 00:11:36.535 { 00:11:36.535 "name": "raid_bdev1", 00:11:36.535 "raid_level": "raid0", 00:11:36.535 "base_bdevs": [ 00:11:36.535 "malloc1", 00:11:36.535 "malloc2" 00:11:36.535 ], 00:11:36.535 "strip_size_kb": 64, 00:11:36.535 
"superblock": false, 00:11:36.535 "method": "bdev_raid_create", 00:11:36.535 "req_id": 1 00:11:36.535 } 00:11:36.535 Got JSON-RPC error response 00:11:36.535 response: 00:11:36.535 { 00:11:36.535 "code": -17, 00:11:36.535 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:36.535 } 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.535 [2024-10-30 10:39:57.951456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:11:36.535 [2024-10-30 10:39:57.951692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.535 [2024-10-30 10:39:57.951772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:36.535 [2024-10-30 10:39:57.951889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.535 [2024-10-30 10:39:57.954839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.535 [2024-10-30 10:39:57.955044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.535 [2024-10-30 10:39:57.955265] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.535 [2024-10-30 10:39:57.955489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.535 pt1 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.535 10:39:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.794 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.794 "name": "raid_bdev1", 00:11:36.794 "uuid": "0400108a-ad3e-42c0-bf37-8552a66bca5c", 00:11:36.794 "strip_size_kb": 64, 00:11:36.794 "state": "configuring", 00:11:36.794 "raid_level": "raid0", 00:11:36.794 "superblock": true, 00:11:36.794 "num_base_bdevs": 2, 00:11:36.794 "num_base_bdevs_discovered": 1, 00:11:36.794 "num_base_bdevs_operational": 2, 00:11:36.794 "base_bdevs_list": [ 00:11:36.794 { 00:11:36.794 "name": "pt1", 00:11:36.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.794 "is_configured": true, 00:11:36.794 "data_offset": 2048, 00:11:36.794 "data_size": 63488 00:11:36.794 }, 00:11:36.794 { 00:11:36.794 "name": null, 00:11:36.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.794 "is_configured": false, 00:11:36.794 "data_offset": 2048, 00:11:36.794 "data_size": 63488 00:11:36.794 } 00:11:36.794 ] 00:11:36.794 }' 00:11:36.794 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.794 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 [2024-10-30 10:39:58.468009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.053 [2024-10-30 10:39:58.468103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.053 [2024-10-30 10:39:58.468135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:37.053 [2024-10-30 10:39:58.468153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.053 [2024-10-30 10:39:58.468740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.053 [2024-10-30 10:39:58.468789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.053 [2024-10-30 10:39:58.468893] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:37.053 [2024-10-30 10:39:58.468930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.053 [2024-10-30 10:39:58.469102] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:37.053 [2024-10-30 10:39:58.469126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:37.053 [2024-10-30 10:39:58.469434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:37.053 [2024-10-30 10:39:58.469633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:11:37.053 [2024-10-30 10:39:58.469650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:37.053 [2024-10-30 10:39:58.469819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.053 pt2 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.312 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.312 "name": "raid_bdev1", 00:11:37.312 "uuid": "0400108a-ad3e-42c0-bf37-8552a66bca5c", 00:11:37.312 "strip_size_kb": 64, 00:11:37.312 "state": "online", 00:11:37.312 "raid_level": "raid0", 00:11:37.312 "superblock": true, 00:11:37.312 "num_base_bdevs": 2, 00:11:37.312 "num_base_bdevs_discovered": 2, 00:11:37.312 "num_base_bdevs_operational": 2, 00:11:37.312 "base_bdevs_list": [ 00:11:37.312 { 00:11:37.312 "name": "pt1", 00:11:37.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.312 "is_configured": true, 00:11:37.312 "data_offset": 2048, 00:11:37.312 "data_size": 63488 00:11:37.312 }, 00:11:37.312 { 00:11:37.312 "name": "pt2", 00:11:37.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.312 "is_configured": true, 00:11:37.312 "data_offset": 2048, 00:11:37.312 "data_size": 63488 00:11:37.312 } 00:11:37.312 ] 00:11:37.312 }' 00:11:37.312 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.312 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.572 10:39:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.572 [2024-10-30 10:39:58.972425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.572 10:39:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.572 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.572 "name": "raid_bdev1", 00:11:37.572 "aliases": [ 00:11:37.572 "0400108a-ad3e-42c0-bf37-8552a66bca5c" 00:11:37.572 ], 00:11:37.572 "product_name": "Raid Volume", 00:11:37.572 "block_size": 512, 00:11:37.572 "num_blocks": 126976, 00:11:37.572 "uuid": "0400108a-ad3e-42c0-bf37-8552a66bca5c", 00:11:37.572 "assigned_rate_limits": { 00:11:37.572 "rw_ios_per_sec": 0, 00:11:37.572 "rw_mbytes_per_sec": 0, 00:11:37.572 "r_mbytes_per_sec": 0, 00:11:37.572 "w_mbytes_per_sec": 0 00:11:37.572 }, 00:11:37.572 "claimed": false, 00:11:37.572 "zoned": false, 00:11:37.572 "supported_io_types": { 00:11:37.572 "read": true, 00:11:37.572 "write": true, 00:11:37.572 "unmap": true, 00:11:37.572 "flush": true, 00:11:37.572 "reset": true, 00:11:37.572 "nvme_admin": false, 00:11:37.572 "nvme_io": false, 00:11:37.572 "nvme_io_md": false, 00:11:37.572 "write_zeroes": true, 00:11:37.572 "zcopy": false, 00:11:37.572 "get_zone_info": false, 00:11:37.572 "zone_management": false, 00:11:37.572 "zone_append": false, 00:11:37.572 "compare": false, 00:11:37.572 "compare_and_write": false, 00:11:37.572 "abort": false, 00:11:37.572 "seek_hole": false, 00:11:37.572 
"seek_data": false, 00:11:37.572 "copy": false, 00:11:37.572 "nvme_iov_md": false 00:11:37.572 }, 00:11:37.572 "memory_domains": [ 00:11:37.572 { 00:11:37.572 "dma_device_id": "system", 00:11:37.572 "dma_device_type": 1 00:11:37.572 }, 00:11:37.572 { 00:11:37.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.572 "dma_device_type": 2 00:11:37.572 }, 00:11:37.572 { 00:11:37.572 "dma_device_id": "system", 00:11:37.572 "dma_device_type": 1 00:11:37.572 }, 00:11:37.572 { 00:11:37.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.572 "dma_device_type": 2 00:11:37.572 } 00:11:37.572 ], 00:11:37.572 "driver_specific": { 00:11:37.572 "raid": { 00:11:37.572 "uuid": "0400108a-ad3e-42c0-bf37-8552a66bca5c", 00:11:37.572 "strip_size_kb": 64, 00:11:37.572 "state": "online", 00:11:37.572 "raid_level": "raid0", 00:11:37.572 "superblock": true, 00:11:37.572 "num_base_bdevs": 2, 00:11:37.572 "num_base_bdevs_discovered": 2, 00:11:37.572 "num_base_bdevs_operational": 2, 00:11:37.572 "base_bdevs_list": [ 00:11:37.573 { 00:11:37.573 "name": "pt1", 00:11:37.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.573 "is_configured": true, 00:11:37.573 "data_offset": 2048, 00:11:37.573 "data_size": 63488 00:11:37.573 }, 00:11:37.573 { 00:11:37.573 "name": "pt2", 00:11:37.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.573 "is_configured": true, 00:11:37.573 "data_offset": 2048, 00:11:37.573 "data_size": 63488 00:11:37.573 } 00:11:37.573 ] 00:11:37.573 } 00:11:37.573 } 00:11:37.573 }' 00:11:37.573 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:37.831 pt2' 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.831 10:39:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:37.831 [2024-10-30 10:39:59.252487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.831 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0400108a-ad3e-42c0-bf37-8552a66bca5c '!=' 0400108a-ad3e-42c0-bf37-8552a66bca5c ']' 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61334 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61334 ']' 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61334 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:37.832 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61334 00:11:38.090 killing process with pid 61334 00:11:38.090 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:38.090 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:38.090 10:39:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 61334' 00:11:38.090 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61334 00:11:38.090 [2024-10-30 10:39:59.329016] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.090 10:39:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61334 00:11:38.090 [2024-10-30 10:39:59.329132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.090 [2024-10-30 10:39:59.329197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.090 [2024-10-30 10:39:59.329216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:38.090 [2024-10-30 10:39:59.516150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.463 10:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:39.463 00:11:39.463 real 0m4.741s 00:11:39.463 user 0m6.996s 00:11:39.463 sys 0m0.651s 00:11:39.463 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:39.463 ************************************ 00:11:39.463 END TEST raid_superblock_test 00:11:39.463 ************************************ 00:11:39.463 10:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 10:40:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:11:39.463 10:40:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:39.463 10:40:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.463 10:40:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 ************************************ 00:11:39.463 START TEST raid_read_error_test 00:11:39.463 ************************************ 00:11:39.463 10:40:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:39.463 10:40:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XHkTZYJFYp 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61550 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61550 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61550 ']' 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:39.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:39.463 10:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 [2024-10-30 10:40:00.691741] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:11:39.463 [2024-10-30 10:40:00.692245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61550 ] 00:11:39.463 [2024-10-30 10:40:00.876163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.721 [2024-10-30 10:40:01.005880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.978 [2024-10-30 10:40:01.209557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.978 [2024-10-30 10:40:01.209832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.237 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:40.237 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:40.237 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.237 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:40.237 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.237 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.498 BaseBdev1_malloc 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.498 true 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.498 [2024-10-30 10:40:01.750373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:40.498 [2024-10-30 10:40:01.750456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.498 [2024-10-30 10:40:01.750488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:40.498 [2024-10-30 10:40:01.750505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.498 [2024-10-30 10:40:01.753301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.498 [2024-10-30 10:40:01.753353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:40.498 BaseBdev1 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.498 BaseBdev2_malloc 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.498 true 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.498 [2024-10-30 10:40:01.816487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:40.498 [2024-10-30 10:40:01.816748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.498 [2024-10-30 10:40:01.816913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:40.498 [2024-10-30 10:40:01.817106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.498 [2024-10-30 10:40:01.820910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.498 [2024-10-30 10:40:01.821149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:40.498 BaseBdev2 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.498 [2024-10-30 10:40:01.829607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:11:40.498 [2024-10-30 10:40:01.832767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.498 [2024-10-30 10:40:01.833289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:40.498 [2024-10-30 10:40:01.833334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:40.498 [2024-10-30 10:40:01.833727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:40.498 [2024-10-30 10:40:01.834127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:40.498 [2024-10-30 10:40:01.834160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:40.498 [2024-10-30 10:40:01.834512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.498 "name": "raid_bdev1", 00:11:40.498 "uuid": "c62f23d4-60ad-49ef-a2ad-2719f48609b6", 00:11:40.498 "strip_size_kb": 64, 00:11:40.498 "state": "online", 00:11:40.498 "raid_level": "raid0", 00:11:40.498 "superblock": true, 00:11:40.498 "num_base_bdevs": 2, 00:11:40.498 "num_base_bdevs_discovered": 2, 00:11:40.498 "num_base_bdevs_operational": 2, 00:11:40.498 "base_bdevs_list": [ 00:11:40.498 { 00:11:40.498 "name": "BaseBdev1", 00:11:40.498 "uuid": "3ccb7341-33a1-53a8-96bf-61f3167236f1", 00:11:40.498 "is_configured": true, 00:11:40.498 "data_offset": 2048, 00:11:40.498 "data_size": 63488 00:11:40.498 }, 00:11:40.498 { 00:11:40.498 "name": "BaseBdev2", 00:11:40.498 "uuid": "f9c56aed-7462-59c9-8880-e4f21787aee1", 00:11:40.498 "is_configured": true, 00:11:40.498 "data_offset": 2048, 00:11:40.498 "data_size": 63488 00:11:40.498 } 00:11:40.498 ] 00:11:40.498 }' 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.498 10:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.065 10:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:41.065 10:40:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.065 [2024-10-30 10:40:02.455955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.001 "name": "raid_bdev1", 00:11:42.001 "uuid": "c62f23d4-60ad-49ef-a2ad-2719f48609b6", 00:11:42.001 "strip_size_kb": 64, 00:11:42.001 "state": "online", 00:11:42.001 "raid_level": "raid0", 00:11:42.001 "superblock": true, 00:11:42.001 "num_base_bdevs": 2, 00:11:42.001 "num_base_bdevs_discovered": 2, 00:11:42.001 "num_base_bdevs_operational": 2, 00:11:42.001 "base_bdevs_list": [ 00:11:42.001 { 00:11:42.001 "name": "BaseBdev1", 00:11:42.001 "uuid": "3ccb7341-33a1-53a8-96bf-61f3167236f1", 00:11:42.001 "is_configured": true, 00:11:42.001 "data_offset": 2048, 00:11:42.001 "data_size": 63488 00:11:42.001 }, 00:11:42.001 { 00:11:42.001 "name": "BaseBdev2", 00:11:42.001 "uuid": "f9c56aed-7462-59c9-8880-e4f21787aee1", 00:11:42.001 "is_configured": true, 00:11:42.001 "data_offset": 2048, 00:11:42.001 "data_size": 63488 00:11:42.001 } 00:11:42.001 ] 00:11:42.001 }' 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.001 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.568 10:40:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.568 [2024-10-30 10:40:03.854635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.568 [2024-10-30 10:40:03.854678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.568 [2024-10-30 10:40:03.858077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.568 [2024-10-30 10:40:03.858139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.568 [2024-10-30 10:40:03.858185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.568 [2024-10-30 10:40:03.858204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:42.568 { 00:11:42.568 "results": [ 00:11:42.568 { 00:11:42.568 "job": "raid_bdev1", 00:11:42.568 "core_mask": "0x1", 00:11:42.568 "workload": "randrw", 00:11:42.568 "percentage": 50, 00:11:42.568 "status": "finished", 00:11:42.568 "queue_depth": 1, 00:11:42.568 "io_size": 131072, 00:11:42.568 "runtime": 1.396298, 00:11:42.568 "iops": 10626.67138390229, 00:11:42.568 "mibps": 1328.3339229877863, 00:11:42.568 "io_failed": 1, 00:11:42.568 "io_timeout": 0, 00:11:42.568 "avg_latency_us": 131.13454104356455, 00:11:42.568 "min_latency_us": 43.28727272727273, 00:11:42.568 "max_latency_us": 1891.6072727272726 00:11:42.568 } 00:11:42.568 ], 00:11:42.568 "core_count": 1 00:11:42.568 } 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61550 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61550 ']' 00:11:42.568 10:40:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61550 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61550 00:11:42.568 killing process with pid 61550 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61550' 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61550 00:11:42.568 [2024-10-30 10:40:03.904306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.568 10:40:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61550 00:11:42.827 [2024-10-30 10:40:04.053149] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XHkTZYJFYp 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:43.765 ************************************ 00:11:43.765 END TEST raid_read_error_test 00:11:43.765 ************************************ 00:11:43.765 00:11:43.765 real 0m4.591s 00:11:43.765 user 0m5.724s 00:11:43.765 sys 0m0.568s 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:43.765 10:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.765 10:40:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:11:43.765 10:40:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:43.765 10:40:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:43.765 10:40:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.765 ************************************ 00:11:43.765 START TEST raid_write_error_test 00:11:43.765 ************************************ 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.765 10:40:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N7RhJrfwbo 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61696 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61696 00:11:43.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61696 ']' 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:43.765 10:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.024 [2024-10-30 10:40:05.327820] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:11:44.024 [2024-10-30 10:40:05.328021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:11:44.282 [2024-10-30 10:40:05.518051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.282 [2024-10-30 10:40:05.674624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.540 [2024-10-30 10:40:05.892691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.540 [2024-10-30 10:40:05.892737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.140 BaseBdev1_malloc 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.140 true 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.140 [2024-10-30 10:40:06.404228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.140 [2024-10-30 10:40:06.404300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.140 [2024-10-30 10:40:06.404329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.140 [2024-10-30 10:40:06.404348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.140 [2024-10-30 10:40:06.407238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.140 [2024-10-30 10:40:06.407292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.140 BaseBdev1 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.140 BaseBdev2_malloc 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.140 10:40:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.140 true 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.140 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.140 [2024-10-30 10:40:06.468561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.140 [2024-10-30 10:40:06.468817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.140 [2024-10-30 10:40:06.468920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.141 [2024-10-30 10:40:06.469133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.141 [2024-10-30 10:40:06.472098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.141 [2024-10-30 10:40:06.472153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.141 BaseBdev2 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.141 [2024-10-30 10:40:06.476814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:45.141 [2024-10-30 10:40:06.479298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.141 [2024-10-30 10:40:06.479582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:45.141 [2024-10-30 10:40:06.479610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:45.141 [2024-10-30 10:40:06.479901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:45.141 [2024-10-30 10:40:06.480160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:45.141 [2024-10-30 10:40:06.480188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:45.141 [2024-10-30 10:40:06.480394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.141 "name": "raid_bdev1", 00:11:45.141 "uuid": "324a4cfb-10b1-4453-a424-99b230dc84dc", 00:11:45.141 "strip_size_kb": 64, 00:11:45.141 "state": "online", 00:11:45.141 "raid_level": "raid0", 00:11:45.141 "superblock": true, 00:11:45.141 "num_base_bdevs": 2, 00:11:45.141 "num_base_bdevs_discovered": 2, 00:11:45.141 "num_base_bdevs_operational": 2, 00:11:45.141 "base_bdevs_list": [ 00:11:45.141 { 00:11:45.141 "name": "BaseBdev1", 00:11:45.141 "uuid": "f1d21bab-6c55-5a4b-89e2-232962be2aea", 00:11:45.141 "is_configured": true, 00:11:45.141 "data_offset": 2048, 00:11:45.141 "data_size": 63488 00:11:45.141 }, 00:11:45.141 { 00:11:45.141 "name": "BaseBdev2", 00:11:45.141 "uuid": "252e329d-3d74-5571-b21a-7989d0a247f5", 00:11:45.141 "is_configured": true, 00:11:45.141 "data_offset": 2048, 00:11:45.141 "data_size": 63488 00:11:45.141 } 00:11:45.141 ] 00:11:45.141 }' 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.141 10:40:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.707 10:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:45.707 10:40:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:45.707 [2024-10-30 10:40:07.138366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.640 10:40:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.640 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.640 "name": "raid_bdev1", 00:11:46.640 "uuid": "324a4cfb-10b1-4453-a424-99b230dc84dc", 00:11:46.640 "strip_size_kb": 64, 00:11:46.640 "state": "online", 00:11:46.640 "raid_level": "raid0", 00:11:46.640 "superblock": true, 00:11:46.640 "num_base_bdevs": 2, 00:11:46.640 "num_base_bdevs_discovered": 2, 00:11:46.640 "num_base_bdevs_operational": 2, 00:11:46.640 "base_bdevs_list": [ 00:11:46.640 { 00:11:46.640 "name": "BaseBdev1", 00:11:46.640 "uuid": "f1d21bab-6c55-5a4b-89e2-232962be2aea", 00:11:46.640 "is_configured": true, 00:11:46.641 "data_offset": 2048, 00:11:46.641 "data_size": 63488 00:11:46.641 }, 00:11:46.641 { 00:11:46.641 "name": "BaseBdev2", 00:11:46.641 "uuid": "252e329d-3d74-5571-b21a-7989d0a247f5", 00:11:46.641 "is_configured": true, 00:11:46.641 "data_offset": 2048, 00:11:46.641 "data_size": 63488 00:11:46.641 } 00:11:46.641 ] 00:11:46.641 }' 00:11:46.641 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.641 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.241 [2024-10-30 10:40:08.540551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.241 [2024-10-30 10:40:08.540594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.241 [2024-10-30 10:40:08.543912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.241 [2024-10-30 10:40:08.543969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.241 [2024-10-30 10:40:08.544032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.241 [2024-10-30 10:40:08.544052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:47.241 { 00:11:47.241 "results": [ 00:11:47.241 { 00:11:47.241 "job": "raid_bdev1", 00:11:47.241 "core_mask": "0x1", 00:11:47.241 "workload": "randrw", 00:11:47.241 "percentage": 50, 00:11:47.241 "status": "finished", 00:11:47.241 "queue_depth": 1, 00:11:47.241 "io_size": 131072, 00:11:47.241 "runtime": 1.399605, 00:11:47.241 "iops": 10981.669828272978, 00:11:47.241 "mibps": 1372.7087285341222, 00:11:47.241 "io_failed": 1, 00:11:47.241 "io_timeout": 0, 00:11:47.241 "avg_latency_us": 126.91444408301348, 00:11:47.241 "min_latency_us": 42.589090909090906, 00:11:47.241 "max_latency_us": 1869.2654545454545 00:11:47.241 } 00:11:47.241 ], 00:11:47.241 "core_count": 1 00:11:47.241 } 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61696 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61696 ']' 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61696 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61696 00:11:47.241 killing process with pid 61696 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61696' 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61696 00:11:47.241 [2024-10-30 10:40:08.583402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.241 10:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61696 00:11:47.241 [2024-10-30 10:40:08.704746] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N7RhJrfwbo 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:48.614 ************************************ 00:11:48.614 END TEST raid_write_error_test 00:11:48.614 ************************************ 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:48.614 
10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:48.614 00:11:48.614 real 0m4.579s 00:11:48.614 user 0m5.761s 00:11:48.614 sys 0m0.577s 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:48.614 10:40:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.614 10:40:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:48.614 10:40:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:11:48.614 10:40:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:48.614 10:40:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:48.614 10:40:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.614 ************************************ 00:11:48.614 START TEST raid_state_function_test 00:11:48.614 ************************************ 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61834 00:11:48.614 Process raid pid: 61834 
00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61834' 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61834 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61834 ']' 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:48.614 10:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.614 [2024-10-30 10:40:09.956621] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:11:48.614 [2024-10-30 10:40:09.956808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.872 [2024-10-30 10:40:10.148065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.872 [2024-10-30 10:40:10.303998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.130 [2024-10-30 10:40:10.520665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.130 [2024-10-30 10:40:10.520940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.698 [2024-10-30 10:40:11.024519] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.698 [2024-10-30 10:40:11.024584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.698 [2024-10-30 10:40:11.024601] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.698 [2024-10-30 10:40:11.024618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.698 10:40:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.698 "name": "Existed_Raid", 00:11:49.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.698 "strip_size_kb": 64, 00:11:49.698 "state": "configuring", 00:11:49.698 
"raid_level": "concat", 00:11:49.698 "superblock": false, 00:11:49.698 "num_base_bdevs": 2, 00:11:49.698 "num_base_bdevs_discovered": 0, 00:11:49.698 "num_base_bdevs_operational": 2, 00:11:49.698 "base_bdevs_list": [ 00:11:49.698 { 00:11:49.698 "name": "BaseBdev1", 00:11:49.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.698 "is_configured": false, 00:11:49.698 "data_offset": 0, 00:11:49.698 "data_size": 0 00:11:49.698 }, 00:11:49.698 { 00:11:49.698 "name": "BaseBdev2", 00:11:49.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.698 "is_configured": false, 00:11:49.698 "data_offset": 0, 00:11:49.698 "data_size": 0 00:11:49.698 } 00:11:49.698 ] 00:11:49.698 }' 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.698 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.337 [2024-10-30 10:40:11.552588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.337 [2024-10-30 10:40:11.552767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:50.337 [2024-10-30 10:40:11.560572] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:50.337 [2024-10-30 10:40:11.560626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.337 [2024-10-30 10:40:11.560642] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.337 [2024-10-30 10:40:11.560661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.337 [2024-10-30 10:40:11.605346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.337 BaseBdev1 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.337 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.337 [ 00:11:50.337 { 00:11:50.337 "name": "BaseBdev1", 00:11:50.337 "aliases": [ 00:11:50.337 "42edefa4-b064-46c4-a679-a18010c17a6d" 00:11:50.337 ], 00:11:50.337 "product_name": "Malloc disk", 00:11:50.337 "block_size": 512, 00:11:50.337 "num_blocks": 65536, 00:11:50.337 "uuid": "42edefa4-b064-46c4-a679-a18010c17a6d", 00:11:50.337 "assigned_rate_limits": { 00:11:50.337 "rw_ios_per_sec": 0, 00:11:50.337 "rw_mbytes_per_sec": 0, 00:11:50.337 "r_mbytes_per_sec": 0, 00:11:50.337 "w_mbytes_per_sec": 0 00:11:50.337 }, 00:11:50.338 "claimed": true, 00:11:50.338 "claim_type": "exclusive_write", 00:11:50.338 "zoned": false, 00:11:50.338 "supported_io_types": { 00:11:50.338 "read": true, 00:11:50.338 "write": true, 00:11:50.338 "unmap": true, 00:11:50.338 "flush": true, 00:11:50.338 "reset": true, 00:11:50.338 "nvme_admin": false, 00:11:50.338 "nvme_io": false, 00:11:50.338 "nvme_io_md": false, 00:11:50.338 "write_zeroes": true, 00:11:50.338 "zcopy": true, 00:11:50.338 "get_zone_info": false, 00:11:50.338 "zone_management": false, 00:11:50.338 "zone_append": false, 00:11:50.338 "compare": false, 00:11:50.338 "compare_and_write": false, 00:11:50.338 "abort": true, 00:11:50.338 "seek_hole": false, 00:11:50.338 "seek_data": false, 00:11:50.338 "copy": true, 00:11:50.338 "nvme_iov_md": 
false 00:11:50.338 }, 00:11:50.338 "memory_domains": [ 00:11:50.338 { 00:11:50.338 "dma_device_id": "system", 00:11:50.338 "dma_device_type": 1 00:11:50.338 }, 00:11:50.338 { 00:11:50.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.338 "dma_device_type": 2 00:11:50.338 } 00:11:50.338 ], 00:11:50.338 "driver_specific": {} 00:11:50.338 } 00:11:50.338 ] 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.338 10:40:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.338 "name": "Existed_Raid", 00:11:50.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.338 "strip_size_kb": 64, 00:11:50.338 "state": "configuring", 00:11:50.338 "raid_level": "concat", 00:11:50.338 "superblock": false, 00:11:50.338 "num_base_bdevs": 2, 00:11:50.338 "num_base_bdevs_discovered": 1, 00:11:50.338 "num_base_bdevs_operational": 2, 00:11:50.338 "base_bdevs_list": [ 00:11:50.338 { 00:11:50.338 "name": "BaseBdev1", 00:11:50.338 "uuid": "42edefa4-b064-46c4-a679-a18010c17a6d", 00:11:50.338 "is_configured": true, 00:11:50.338 "data_offset": 0, 00:11:50.338 "data_size": 65536 00:11:50.338 }, 00:11:50.338 { 00:11:50.338 "name": "BaseBdev2", 00:11:50.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.338 "is_configured": false, 00:11:50.338 "data_offset": 0, 00:11:50.338 "data_size": 0 00:11:50.338 } 00:11:50.338 ] 00:11:50.338 }' 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.338 10:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.918 [2024-10-30 10:40:12.189551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.918 [2024-10-30 10:40:12.189612] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.918 [2024-10-30 10:40:12.197591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.918 [2024-10-30 10:40:12.200010] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.918 [2024-10-30 10:40:12.200064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.918 "name": "Existed_Raid", 00:11:50.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.918 "strip_size_kb": 64, 00:11:50.918 "state": "configuring", 00:11:50.918 "raid_level": "concat", 00:11:50.918 "superblock": false, 00:11:50.918 "num_base_bdevs": 2, 00:11:50.918 "num_base_bdevs_discovered": 1, 00:11:50.918 "num_base_bdevs_operational": 2, 00:11:50.918 "base_bdevs_list": [ 00:11:50.918 { 00:11:50.918 "name": "BaseBdev1", 00:11:50.918 "uuid": "42edefa4-b064-46c4-a679-a18010c17a6d", 00:11:50.918 "is_configured": true, 00:11:50.918 "data_offset": 0, 00:11:50.918 "data_size": 65536 00:11:50.918 }, 00:11:50.918 { 00:11:50.918 "name": "BaseBdev2", 00:11:50.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.918 "is_configured": false, 00:11:50.918 "data_offset": 0, 00:11:50.918 "data_size": 0 
00:11:50.918 } 00:11:50.918 ] 00:11:50.918 }' 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.918 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.485 [2024-10-30 10:40:12.743506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.485 [2024-10-30 10:40:12.743563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:51.485 [2024-10-30 10:40:12.743576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:51.485 [2024-10-30 10:40:12.743898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:51.485 [2024-10-30 10:40:12.744130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:51.485 [2024-10-30 10:40:12.744154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:51.485 BaseBdev2 00:11:51.485 [2024-10-30 10:40:12.744455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:51.485 10:40:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:51.485 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 [ 00:11:51.486 { 00:11:51.486 "name": "BaseBdev2", 00:11:51.486 "aliases": [ 00:11:51.486 "76c767fb-3ecf-4bcb-b189-7516ff5a7036" 00:11:51.486 ], 00:11:51.486 "product_name": "Malloc disk", 00:11:51.486 "block_size": 512, 00:11:51.486 "num_blocks": 65536, 00:11:51.486 "uuid": "76c767fb-3ecf-4bcb-b189-7516ff5a7036", 00:11:51.486 "assigned_rate_limits": { 00:11:51.486 "rw_ios_per_sec": 0, 00:11:51.486 "rw_mbytes_per_sec": 0, 00:11:51.486 "r_mbytes_per_sec": 0, 00:11:51.486 "w_mbytes_per_sec": 0 00:11:51.486 }, 00:11:51.486 "claimed": true, 00:11:51.486 "claim_type": "exclusive_write", 00:11:51.486 "zoned": false, 00:11:51.486 "supported_io_types": { 00:11:51.486 "read": true, 00:11:51.486 "write": true, 00:11:51.486 "unmap": true, 00:11:51.486 "flush": true, 00:11:51.486 "reset": true, 00:11:51.486 "nvme_admin": false, 00:11:51.486 "nvme_io": false, 00:11:51.486 "nvme_io_md": 
false, 00:11:51.486 "write_zeroes": true, 00:11:51.486 "zcopy": true, 00:11:51.486 "get_zone_info": false, 00:11:51.486 "zone_management": false, 00:11:51.486 "zone_append": false, 00:11:51.486 "compare": false, 00:11:51.486 "compare_and_write": false, 00:11:51.486 "abort": true, 00:11:51.486 "seek_hole": false, 00:11:51.486 "seek_data": false, 00:11:51.486 "copy": true, 00:11:51.486 "nvme_iov_md": false 00:11:51.486 }, 00:11:51.486 "memory_domains": [ 00:11:51.486 { 00:11:51.486 "dma_device_id": "system", 00:11:51.486 "dma_device_type": 1 00:11:51.486 }, 00:11:51.486 { 00:11:51.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.486 "dma_device_type": 2 00:11:51.486 } 00:11:51.486 ], 00:11:51.486 "driver_specific": {} 00:11:51.486 } 00:11:51.486 ] 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.486 "name": "Existed_Raid", 00:11:51.486 "uuid": "92ad7c47-8c41-4d1b-b51f-acec634c3df0", 00:11:51.486 "strip_size_kb": 64, 00:11:51.486 "state": "online", 00:11:51.486 "raid_level": "concat", 00:11:51.486 "superblock": false, 00:11:51.486 "num_base_bdevs": 2, 00:11:51.486 "num_base_bdevs_discovered": 2, 00:11:51.486 "num_base_bdevs_operational": 2, 00:11:51.486 "base_bdevs_list": [ 00:11:51.486 { 00:11:51.486 "name": "BaseBdev1", 00:11:51.486 "uuid": "42edefa4-b064-46c4-a679-a18010c17a6d", 00:11:51.486 "is_configured": true, 00:11:51.486 "data_offset": 0, 00:11:51.486 "data_size": 65536 00:11:51.486 }, 00:11:51.486 { 00:11:51.486 "name": "BaseBdev2", 00:11:51.486 "uuid": "76c767fb-3ecf-4bcb-b189-7516ff5a7036", 00:11:51.486 "is_configured": true, 00:11:51.486 "data_offset": 0, 00:11:51.486 "data_size": 65536 00:11:51.486 } 00:11:51.486 ] 00:11:51.486 }' 00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:51.486 10:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.054 [2024-10-30 10:40:13.296055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.054 "name": "Existed_Raid", 00:11:52.054 "aliases": [ 00:11:52.054 "92ad7c47-8c41-4d1b-b51f-acec634c3df0" 00:11:52.054 ], 00:11:52.054 "product_name": "Raid Volume", 00:11:52.054 "block_size": 512, 00:11:52.054 "num_blocks": 131072, 00:11:52.054 "uuid": "92ad7c47-8c41-4d1b-b51f-acec634c3df0", 00:11:52.054 "assigned_rate_limits": { 00:11:52.054 "rw_ios_per_sec": 0, 00:11:52.054 "rw_mbytes_per_sec": 0, 00:11:52.054 "r_mbytes_per_sec": 
0, 00:11:52.054 "w_mbytes_per_sec": 0 00:11:52.054 }, 00:11:52.054 "claimed": false, 00:11:52.054 "zoned": false, 00:11:52.054 "supported_io_types": { 00:11:52.054 "read": true, 00:11:52.054 "write": true, 00:11:52.054 "unmap": true, 00:11:52.054 "flush": true, 00:11:52.054 "reset": true, 00:11:52.054 "nvme_admin": false, 00:11:52.054 "nvme_io": false, 00:11:52.054 "nvme_io_md": false, 00:11:52.054 "write_zeroes": true, 00:11:52.054 "zcopy": false, 00:11:52.054 "get_zone_info": false, 00:11:52.054 "zone_management": false, 00:11:52.054 "zone_append": false, 00:11:52.054 "compare": false, 00:11:52.054 "compare_and_write": false, 00:11:52.054 "abort": false, 00:11:52.054 "seek_hole": false, 00:11:52.054 "seek_data": false, 00:11:52.054 "copy": false, 00:11:52.054 "nvme_iov_md": false 00:11:52.054 }, 00:11:52.054 "memory_domains": [ 00:11:52.054 { 00:11:52.054 "dma_device_id": "system", 00:11:52.054 "dma_device_type": 1 00:11:52.054 }, 00:11:52.054 { 00:11:52.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.054 "dma_device_type": 2 00:11:52.054 }, 00:11:52.054 { 00:11:52.054 "dma_device_id": "system", 00:11:52.054 "dma_device_type": 1 00:11:52.054 }, 00:11:52.054 { 00:11:52.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.054 "dma_device_type": 2 00:11:52.054 } 00:11:52.054 ], 00:11:52.054 "driver_specific": { 00:11:52.054 "raid": { 00:11:52.054 "uuid": "92ad7c47-8c41-4d1b-b51f-acec634c3df0", 00:11:52.054 "strip_size_kb": 64, 00:11:52.054 "state": "online", 00:11:52.054 "raid_level": "concat", 00:11:52.054 "superblock": false, 00:11:52.054 "num_base_bdevs": 2, 00:11:52.054 "num_base_bdevs_discovered": 2, 00:11:52.054 "num_base_bdevs_operational": 2, 00:11:52.054 "base_bdevs_list": [ 00:11:52.054 { 00:11:52.054 "name": "BaseBdev1", 00:11:52.054 "uuid": "42edefa4-b064-46c4-a679-a18010c17a6d", 00:11:52.054 "is_configured": true, 00:11:52.054 "data_offset": 0, 00:11:52.054 "data_size": 65536 00:11:52.054 }, 00:11:52.054 { 00:11:52.054 "name": "BaseBdev2", 
00:11:52.054 "uuid": "76c767fb-3ecf-4bcb-b189-7516ff5a7036", 00:11:52.054 "is_configured": true, 00:11:52.054 "data_offset": 0, 00:11:52.054 "data_size": 65536 00:11:52.054 } 00:11:52.054 ] 00:11:52.054 } 00:11:52.054 } 00:11:52.054 }' 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:52.054 BaseBdev2' 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.054 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:52.055 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.055 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.055 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.055 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.314 [2024-10-30 10:40:13.547802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:52.314 [2024-10-30 10:40:13.547843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.314 [2024-10-30 10:40:13.547907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:52.314 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.315 "name": "Existed_Raid", 00:11:52.315 "uuid": "92ad7c47-8c41-4d1b-b51f-acec634c3df0", 00:11:52.315 "strip_size_kb": 64, 00:11:52.315 
"state": "offline", 00:11:52.315 "raid_level": "concat", 00:11:52.315 "superblock": false, 00:11:52.315 "num_base_bdevs": 2, 00:11:52.315 "num_base_bdevs_discovered": 1, 00:11:52.315 "num_base_bdevs_operational": 1, 00:11:52.315 "base_bdevs_list": [ 00:11:52.315 { 00:11:52.315 "name": null, 00:11:52.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.315 "is_configured": false, 00:11:52.315 "data_offset": 0, 00:11:52.315 "data_size": 65536 00:11:52.315 }, 00:11:52.315 { 00:11:52.315 "name": "BaseBdev2", 00:11:52.315 "uuid": "76c767fb-3ecf-4bcb-b189-7516ff5a7036", 00:11:52.315 "is_configured": true, 00:11:52.315 "data_offset": 0, 00:11:52.315 "data_size": 65536 00:11:52.315 } 00:11:52.315 ] 00:11:52.315 }' 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.315 10:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 [2024-10-30 10:40:14.240873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:52.883 [2024-10-30 10:40:14.241119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:52.883 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61834 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61834 ']' 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61834 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61834 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61834' 00:11:53.142 killing process with pid 61834 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61834 00:11:53.142 [2024-10-30 10:40:14.417937] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.142 10:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61834 00:11:53.142 [2024-10-30 10:40:14.432761] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:54.077 00:11:54.077 real 0m5.623s 00:11:54.077 user 0m8.534s 00:11:54.077 sys 0m0.806s 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.077 ************************************ 00:11:54.077 END TEST raid_state_function_test 00:11:54.077 ************************************ 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.077 10:40:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:11:54.077 10:40:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:11:54.077 10:40:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.077 10:40:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.077 ************************************ 00:11:54.077 START TEST raid_state_function_test_sb 00:11:54.077 ************************************ 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:54.077 Process raid pid: 62098 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62098 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62098' 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62098 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62098 ']' 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:54.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.077 10:40:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:54.336 [2024-10-30 10:40:15.651079] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:11:54.336 [2024-10-30 10:40:15.652284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.593 [2024-10-30 10:40:15.845797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.593 [2024-10-30 10:40:15.973497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.852 [2024-10-30 10:40:16.178475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.852 [2024-10-30 10:40:16.178525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.110 10:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:55.110 10:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:55.110 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:55.110 10:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.110 10:40:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.368 [2024-10-30 10:40:16.583726] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.368 [2024-10-30 10:40:16.583816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.368 [2024-10-30 10:40:16.583833] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.368 [2024-10-30 10:40:16.583849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.368 "name": "Existed_Raid", 00:11:55.368 "uuid": "467684e9-ee62-476a-8e32-95f8f30c01c3", 00:11:55.368 "strip_size_kb": 64, 00:11:55.368 "state": "configuring", 00:11:55.368 "raid_level": "concat", 00:11:55.368 "superblock": true, 00:11:55.368 "num_base_bdevs": 2, 00:11:55.368 "num_base_bdevs_discovered": 0, 00:11:55.368 "num_base_bdevs_operational": 2, 00:11:55.368 "base_bdevs_list": [ 00:11:55.368 { 00:11:55.368 "name": "BaseBdev1", 00:11:55.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.368 "is_configured": false, 00:11:55.368 "data_offset": 0, 00:11:55.368 "data_size": 0 00:11:55.368 }, 00:11:55.368 { 00:11:55.368 "name": "BaseBdev2", 00:11:55.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.368 "is_configured": false, 00:11:55.368 "data_offset": 0, 00:11:55.368 "data_size": 0 00:11:55.368 } 00:11:55.368 ] 00:11:55.368 }' 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.368 10:40:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.626 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.626 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.627 [2024-10-30 10:40:17.079788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.627 [2024-10-30 10:40:17.079829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.627 [2024-10-30 10:40:17.087775] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.627 [2024-10-30 10:40:17.087827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.627 [2024-10-30 10:40:17.087841] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.627 [2024-10-30 10:40:17.087860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.627 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.885 [2024-10-30 10:40:17.131884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.885 BaseBdev1 00:11:55.885 10:40:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.885 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.885 [ 00:11:55.885 { 00:11:55.885 "name": "BaseBdev1", 00:11:55.885 "aliases": [ 00:11:55.885 "516d1b86-1ec4-490e-9892-4bb1c2abae84" 00:11:55.885 ], 00:11:55.885 "product_name": "Malloc disk", 00:11:55.885 "block_size": 512, 00:11:55.885 "num_blocks": 65536, 00:11:55.885 "uuid": "516d1b86-1ec4-490e-9892-4bb1c2abae84", 00:11:55.885 "assigned_rate_limits": { 00:11:55.885 "rw_ios_per_sec": 0, 
00:11:55.885 "rw_mbytes_per_sec": 0, 00:11:55.885 "r_mbytes_per_sec": 0, 00:11:55.885 "w_mbytes_per_sec": 0 00:11:55.885 }, 00:11:55.885 "claimed": true, 00:11:55.885 "claim_type": "exclusive_write", 00:11:55.885 "zoned": false, 00:11:55.885 "supported_io_types": { 00:11:55.885 "read": true, 00:11:55.885 "write": true, 00:11:55.885 "unmap": true, 00:11:55.885 "flush": true, 00:11:55.885 "reset": true, 00:11:55.885 "nvme_admin": false, 00:11:55.885 "nvme_io": false, 00:11:55.885 "nvme_io_md": false, 00:11:55.885 "write_zeroes": true, 00:11:55.885 "zcopy": true, 00:11:55.885 "get_zone_info": false, 00:11:55.885 "zone_management": false, 00:11:55.886 "zone_append": false, 00:11:55.886 "compare": false, 00:11:55.886 "compare_and_write": false, 00:11:55.886 "abort": true, 00:11:55.886 "seek_hole": false, 00:11:55.886 "seek_data": false, 00:11:55.886 "copy": true, 00:11:55.886 "nvme_iov_md": false 00:11:55.886 }, 00:11:55.886 "memory_domains": [ 00:11:55.886 { 00:11:55.886 "dma_device_id": "system", 00:11:55.886 "dma_device_type": 1 00:11:55.886 }, 00:11:55.886 { 00:11:55.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.886 "dma_device_type": 2 00:11:55.886 } 00:11:55.886 ], 00:11:55.886 "driver_specific": {} 00:11:55.886 } 00:11:55.886 ] 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.886 "name": "Existed_Raid", 00:11:55.886 "uuid": "50350c7b-1417-4856-bad2-f0d319c9732d", 00:11:55.886 "strip_size_kb": 64, 00:11:55.886 "state": "configuring", 00:11:55.886 "raid_level": "concat", 00:11:55.886 "superblock": true, 00:11:55.886 "num_base_bdevs": 2, 00:11:55.886 "num_base_bdevs_discovered": 1, 00:11:55.886 "num_base_bdevs_operational": 2, 00:11:55.886 "base_bdevs_list": [ 00:11:55.886 { 00:11:55.886 "name": "BaseBdev1", 00:11:55.886 "uuid": "516d1b86-1ec4-490e-9892-4bb1c2abae84", 00:11:55.886 "is_configured": true, 00:11:55.886 "data_offset": 2048, 00:11:55.886 "data_size": 63488 00:11:55.886 }, 
00:11:55.886 { 00:11:55.886 "name": "BaseBdev2", 00:11:55.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.886 "is_configured": false, 00:11:55.886 "data_offset": 0, 00:11:55.886 "data_size": 0 00:11:55.886 } 00:11:55.886 ] 00:11:55.886 }' 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.886 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.453 [2024-10-30 10:40:17.676094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.453 [2024-10-30 10:40:17.676287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.453 [2024-10-30 10:40:17.684144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.453 [2024-10-30 10:40:17.686629] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.453 [2024-10-30 10:40:17.686676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.453 
10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.453 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.454 
10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.454 "name": "Existed_Raid", 00:11:56.454 "uuid": "9db71b5f-3114-476a-9d71-6edb6bcaee67", 00:11:56.454 "strip_size_kb": 64, 00:11:56.454 "state": "configuring", 00:11:56.454 "raid_level": "concat", 00:11:56.454 "superblock": true, 00:11:56.454 "num_base_bdevs": 2, 00:11:56.454 "num_base_bdevs_discovered": 1, 00:11:56.454 "num_base_bdevs_operational": 2, 00:11:56.454 "base_bdevs_list": [ 00:11:56.454 { 00:11:56.454 "name": "BaseBdev1", 00:11:56.454 "uuid": "516d1b86-1ec4-490e-9892-4bb1c2abae84", 00:11:56.454 "is_configured": true, 00:11:56.454 "data_offset": 2048, 00:11:56.454 "data_size": 63488 00:11:56.454 }, 00:11:56.454 { 00:11:56.454 "name": "BaseBdev2", 00:11:56.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.454 "is_configured": false, 00:11:56.454 "data_offset": 0, 00:11:56.454 "data_size": 0 00:11:56.454 } 00:11:56.454 ] 00:11:56.454 }' 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.454 10:40:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.020 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.020 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.020 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.020 [2024-10-30 10:40:18.274688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.021 [2024-10-30 10:40:18.275004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:57.021 [2024-10-30 10:40:18.275024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 126976, blocklen 512 00:11:57.021 BaseBdev2 00:11:57.021 [2024-10-30 10:40:18.275358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:57.021 [2024-10-30 10:40:18.275577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:57.021 [2024-10-30 10:40:18.275598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:57.021 [2024-10-30 10:40:18.275766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.021 10:40:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.021 [ 00:11:57.021 { 00:11:57.021 "name": "BaseBdev2", 00:11:57.021 "aliases": [ 00:11:57.021 "c0dfa330-05fd-41db-8737-bd043e9ea719" 00:11:57.021 ], 00:11:57.021 "product_name": "Malloc disk", 00:11:57.021 "block_size": 512, 00:11:57.021 "num_blocks": 65536, 00:11:57.021 "uuid": "c0dfa330-05fd-41db-8737-bd043e9ea719", 00:11:57.021 "assigned_rate_limits": { 00:11:57.021 "rw_ios_per_sec": 0, 00:11:57.021 "rw_mbytes_per_sec": 0, 00:11:57.021 "r_mbytes_per_sec": 0, 00:11:57.021 "w_mbytes_per_sec": 0 00:11:57.021 }, 00:11:57.021 "claimed": true, 00:11:57.021 "claim_type": "exclusive_write", 00:11:57.021 "zoned": false, 00:11:57.021 "supported_io_types": { 00:11:57.021 "read": true, 00:11:57.021 "write": true, 00:11:57.021 "unmap": true, 00:11:57.021 "flush": true, 00:11:57.021 "reset": true, 00:11:57.021 "nvme_admin": false, 00:11:57.021 "nvme_io": false, 00:11:57.021 "nvme_io_md": false, 00:11:57.021 "write_zeroes": true, 00:11:57.021 "zcopy": true, 00:11:57.021 "get_zone_info": false, 00:11:57.021 "zone_management": false, 00:11:57.021 "zone_append": false, 00:11:57.021 "compare": false, 00:11:57.021 "compare_and_write": false, 00:11:57.021 "abort": true, 00:11:57.021 "seek_hole": false, 00:11:57.021 "seek_data": false, 00:11:57.021 "copy": true, 00:11:57.021 "nvme_iov_md": false 00:11:57.021 }, 00:11:57.021 "memory_domains": [ 00:11:57.021 { 00:11:57.021 "dma_device_id": "system", 00:11:57.021 "dma_device_type": 1 00:11:57.021 }, 00:11:57.021 { 00:11:57.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.021 "dma_device_type": 2 00:11:57.021 } 00:11:57.021 ], 00:11:57.021 "driver_specific": {} 00:11:57.021 } 00:11:57.021 ] 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.021 10:40:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.021 10:40:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.021 "name": "Existed_Raid", 00:11:57.021 "uuid": "9db71b5f-3114-476a-9d71-6edb6bcaee67", 00:11:57.021 "strip_size_kb": 64, 00:11:57.021 "state": "online", 00:11:57.021 "raid_level": "concat", 00:11:57.021 "superblock": true, 00:11:57.021 "num_base_bdevs": 2, 00:11:57.021 "num_base_bdevs_discovered": 2, 00:11:57.021 "num_base_bdevs_operational": 2, 00:11:57.021 "base_bdevs_list": [ 00:11:57.021 { 00:11:57.021 "name": "BaseBdev1", 00:11:57.021 "uuid": "516d1b86-1ec4-490e-9892-4bb1c2abae84", 00:11:57.021 "is_configured": true, 00:11:57.021 "data_offset": 2048, 00:11:57.021 "data_size": 63488 00:11:57.021 }, 00:11:57.021 { 00:11:57.021 "name": "BaseBdev2", 00:11:57.021 "uuid": "c0dfa330-05fd-41db-8737-bd043e9ea719", 00:11:57.021 "is_configured": true, 00:11:57.021 "data_offset": 2048, 00:11:57.021 "data_size": 63488 00:11:57.021 } 00:11:57.021 ] 00:11:57.021 }' 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.021 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 
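The `verify_raid_bdev_state` helper exercised above (bdev_raid.sh@103-115) filters the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the state, raid level, strip size, and base-bdev counts against the expected values. A minimal Python sketch of that same check, using field values taken from the RPC output captured in this log; `check_raid_state` and `RPC_OUTPUT` are illustrative names, not part of SPDK:

```python
import json

# Sample of `rpc.py bdev_raid_get_bdevs all` output, reduced to the fields
# that verify_raid_bdev_state inspects (values copied from the log above).
RPC_OUTPUT = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "superblock": True,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
}])

def check_raid_state(rpc_json, name, expected_state, raid_level,
                     strip_size_kb, num_operational):
    """Mirror the shell helper: select the named raid bdev from the RPC
    dump and compare the fields the test asserts on."""
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in json.loads(rpc_json) if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational)

# verify_raid_bdev_state Existed_Raid online concat 64 2
assert check_raid_state(RPC_OUTPUT, "Existed_Raid", "online", "concat", 64, 2)
```

The shell test performs the same comparison by string-matching the jq output; the sketch just makes the field-by-field logic explicit.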
00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.586 [2024-10-30 10:40:18.843275] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.586 "name": "Existed_Raid", 00:11:57.586 "aliases": [ 00:11:57.586 "9db71b5f-3114-476a-9d71-6edb6bcaee67" 00:11:57.586 ], 00:11:57.586 "product_name": "Raid Volume", 00:11:57.586 "block_size": 512, 00:11:57.586 "num_blocks": 126976, 00:11:57.586 "uuid": "9db71b5f-3114-476a-9d71-6edb6bcaee67", 00:11:57.586 "assigned_rate_limits": { 00:11:57.586 "rw_ios_per_sec": 0, 00:11:57.586 "rw_mbytes_per_sec": 0, 00:11:57.586 "r_mbytes_per_sec": 0, 00:11:57.586 "w_mbytes_per_sec": 0 00:11:57.586 }, 00:11:57.586 "claimed": false, 00:11:57.586 "zoned": false, 00:11:57.586 "supported_io_types": { 00:11:57.586 "read": true, 00:11:57.586 "write": true, 00:11:57.586 "unmap": true, 00:11:57.586 "flush": true, 00:11:57.586 "reset": true, 00:11:57.586 "nvme_admin": false, 00:11:57.586 "nvme_io": false, 00:11:57.586 "nvme_io_md": false, 00:11:57.586 "write_zeroes": true, 00:11:57.586 "zcopy": false, 00:11:57.586 "get_zone_info": false, 00:11:57.586 "zone_management": false, 00:11:57.586 "zone_append": false, 00:11:57.586 "compare": false, 00:11:57.586 "compare_and_write": false, 00:11:57.586 "abort": false, 00:11:57.586 "seek_hole": false, 00:11:57.586 "seek_data": false, 00:11:57.586 "copy": false, 
00:11:57.586 "nvme_iov_md": false 00:11:57.586 }, 00:11:57.586 "memory_domains": [ 00:11:57.586 { 00:11:57.586 "dma_device_id": "system", 00:11:57.586 "dma_device_type": 1 00:11:57.586 }, 00:11:57.586 { 00:11:57.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.586 "dma_device_type": 2 00:11:57.586 }, 00:11:57.586 { 00:11:57.586 "dma_device_id": "system", 00:11:57.586 "dma_device_type": 1 00:11:57.586 }, 00:11:57.586 { 00:11:57.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.586 "dma_device_type": 2 00:11:57.586 } 00:11:57.586 ], 00:11:57.586 "driver_specific": { 00:11:57.586 "raid": { 00:11:57.586 "uuid": "9db71b5f-3114-476a-9d71-6edb6bcaee67", 00:11:57.586 "strip_size_kb": 64, 00:11:57.586 "state": "online", 00:11:57.586 "raid_level": "concat", 00:11:57.586 "superblock": true, 00:11:57.586 "num_base_bdevs": 2, 00:11:57.586 "num_base_bdevs_discovered": 2, 00:11:57.586 "num_base_bdevs_operational": 2, 00:11:57.586 "base_bdevs_list": [ 00:11:57.586 { 00:11:57.586 "name": "BaseBdev1", 00:11:57.586 "uuid": "516d1b86-1ec4-490e-9892-4bb1c2abae84", 00:11:57.586 "is_configured": true, 00:11:57.586 "data_offset": 2048, 00:11:57.586 "data_size": 63488 00:11:57.586 }, 00:11:57.586 { 00:11:57.586 "name": "BaseBdev2", 00:11:57.586 "uuid": "c0dfa330-05fd-41db-8737-bd043e9ea719", 00:11:57.586 "is_configured": true, 00:11:57.586 "data_offset": 2048, 00:11:57.586 "data_size": 63488 00:11:57.586 } 00:11:57.586 ] 00:11:57.586 } 00:11:57.586 } 00:11:57.586 }' 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:57.586 BaseBdev2' 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.586 10:40:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.586 10:40:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.586 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.586 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.586 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.586 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.586 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.586 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.586 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.845 [2024-10-30 10:40:19.095044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.845 [2024-10-30 10:40:19.095085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.845 [2024-10-30 10:40:19.095150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.845 
10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.845 "name": "Existed_Raid", 00:11:57.845 "uuid": "9db71b5f-3114-476a-9d71-6edb6bcaee67", 00:11:57.845 "strip_size_kb": 64, 00:11:57.845 "state": "offline", 00:11:57.845 "raid_level": "concat", 00:11:57.845 "superblock": true, 00:11:57.845 "num_base_bdevs": 2, 00:11:57.845 "num_base_bdevs_discovered": 1, 00:11:57.845 "num_base_bdevs_operational": 1, 00:11:57.845 "base_bdevs_list": [ 00:11:57.845 { 00:11:57.845 "name": null, 00:11:57.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.845 "is_configured": false, 00:11:57.845 "data_offset": 0, 00:11:57.845 "data_size": 63488 00:11:57.845 }, 00:11:57.845 { 00:11:57.845 "name": "BaseBdev2", 00:11:57.845 "uuid": "c0dfa330-05fd-41db-8737-bd043e9ea719", 00:11:57.845 
"is_configured": true, 00:11:57.845 "data_offset": 2048, 00:11:57.845 "data_size": 63488 00:11:57.845 } 00:11:57.845 ] 00:11:57.845 }' 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.845 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 [2024-10-30 10:40:19.772340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.415 [2024-10-30 10:40:19.772529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:58.415 10:40:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:58.415 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62098 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62098 ']' 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62098 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62098 00:11:58.674 killing process with pid 62098 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62098' 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62098 00:11:58.674 [2024-10-30 10:40:19.951034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.674 10:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62098 00:11:58.674 [2024-10-30 10:40:19.965655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.608 10:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:59.608 00:11:59.608 real 0m5.465s 00:11:59.608 user 0m8.281s 00:11:59.608 sys 0m0.760s 00:11:59.608 10:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:59.608 10:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.608 ************************************ 00:11:59.608 END TEST raid_state_function_test_sb 00:11:59.608 ************************************ 00:11:59.608 10:40:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:59.608 10:40:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:59.608 10:40:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:59.608 10:40:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.608 ************************************ 00:11:59.608 START TEST raid_superblock_test 00:11:59.608 ************************************ 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62350 00:11:59.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62350 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62350 ']' 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:59.608 10:40:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.867 [2024-10-30 10:40:21.141082] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:11:59.867 [2024-10-30 10:40:21.141269] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62350 ] 00:11:59.867 [2024-10-30 10:40:21.331132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.126 [2024-10-30 10:40:21.483229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.384 [2024-10-30 10:40:21.697837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.384 [2024-10-30 10:40:21.697915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.952 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:00.952 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:00.952 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:00.952 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.952 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:00.952 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:00.952 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:00.952 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:00.953 
10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.953 malloc1 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.953 [2024-10-30 10:40:22.225439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:00.953 [2024-10-30 10:40:22.225665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.953 [2024-10-30 10:40:22.225748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:00.953 [2024-10-30 10:40:22.225991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.953 [2024-10-30 10:40:22.228797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.953 [2024-10-30 10:40:22.228991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:00.953 pt1 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.953 malloc2 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.953 [2024-10-30 10:40:22.281259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:00.953 [2024-10-30 10:40:22.281352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.953 [2024-10-30 10:40:22.281398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:00.953 [2024-10-30 10:40:22.281423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.953 [2024-10-30 10:40:22.284956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.953 [2024-10-30 10:40:22.285042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:00.953 
pt2 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.953 [2024-10-30 10:40:22.293358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:00.953 [2024-10-30 10:40:22.296404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:00.953 [2024-10-30 10:40:22.296698] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:00.953 [2024-10-30 10:40:22.296722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:00.953 [2024-10-30 10:40:22.297183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:00.953 [2024-10-30 10:40:22.297457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:00.953 [2024-10-30 10:40:22.297488] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:00.953 [2024-10-30 10:40:22.297806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.953 "name": "raid_bdev1", 00:12:00.953 "uuid": "2ea46071-877f-442c-ab0d-5cce56996865", 00:12:00.953 "strip_size_kb": 64, 00:12:00.953 "state": "online", 00:12:00.953 "raid_level": "concat", 00:12:00.953 "superblock": true, 00:12:00.953 "num_base_bdevs": 2, 00:12:00.953 "num_base_bdevs_discovered": 2, 00:12:00.953 "num_base_bdevs_operational": 2, 00:12:00.953 "base_bdevs_list": [ 00:12:00.953 { 00:12:00.953 "name": "pt1", 
00:12:00.953 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:00.953 "is_configured": true, 00:12:00.953 "data_offset": 2048, 00:12:00.953 "data_size": 63488 00:12:00.953 }, 00:12:00.953 { 00:12:00.953 "name": "pt2", 00:12:00.953 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.953 "is_configured": true, 00:12:00.953 "data_offset": 2048, 00:12:00.953 "data_size": 63488 00:12:00.953 } 00:12:00.953 ] 00:12:00.953 }' 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.953 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.520 [2024-10-30 10:40:22.806191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:01.520 "name": "raid_bdev1", 00:12:01.520 "aliases": [ 00:12:01.520 "2ea46071-877f-442c-ab0d-5cce56996865" 00:12:01.520 ], 00:12:01.520 "product_name": "Raid Volume", 00:12:01.520 "block_size": 512, 00:12:01.520 "num_blocks": 126976, 00:12:01.520 "uuid": "2ea46071-877f-442c-ab0d-5cce56996865", 00:12:01.520 "assigned_rate_limits": { 00:12:01.520 "rw_ios_per_sec": 0, 00:12:01.520 "rw_mbytes_per_sec": 0, 00:12:01.520 "r_mbytes_per_sec": 0, 00:12:01.520 "w_mbytes_per_sec": 0 00:12:01.520 }, 00:12:01.520 "claimed": false, 00:12:01.520 "zoned": false, 00:12:01.520 "supported_io_types": { 00:12:01.520 "read": true, 00:12:01.520 "write": true, 00:12:01.520 "unmap": true, 00:12:01.520 "flush": true, 00:12:01.520 "reset": true, 00:12:01.520 "nvme_admin": false, 00:12:01.520 "nvme_io": false, 00:12:01.520 "nvme_io_md": false, 00:12:01.520 "write_zeroes": true, 00:12:01.520 "zcopy": false, 00:12:01.520 "get_zone_info": false, 00:12:01.520 "zone_management": false, 00:12:01.520 "zone_append": false, 00:12:01.520 "compare": false, 00:12:01.520 "compare_and_write": false, 00:12:01.520 "abort": false, 00:12:01.520 "seek_hole": false, 00:12:01.520 "seek_data": false, 00:12:01.520 "copy": false, 00:12:01.520 "nvme_iov_md": false 00:12:01.520 }, 00:12:01.520 "memory_domains": [ 00:12:01.520 { 00:12:01.520 "dma_device_id": "system", 00:12:01.520 "dma_device_type": 1 00:12:01.520 }, 00:12:01.520 { 00:12:01.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.520 "dma_device_type": 2 00:12:01.520 }, 00:12:01.520 { 00:12:01.520 "dma_device_id": "system", 00:12:01.520 "dma_device_type": 1 00:12:01.520 }, 00:12:01.520 { 00:12:01.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.520 "dma_device_type": 2 00:12:01.520 } 00:12:01.520 ], 00:12:01.520 "driver_specific": { 00:12:01.520 "raid": { 00:12:01.520 "uuid": "2ea46071-877f-442c-ab0d-5cce56996865", 00:12:01.520 "strip_size_kb": 64, 00:12:01.520 "state": "online", 00:12:01.520 
"raid_level": "concat", 00:12:01.520 "superblock": true, 00:12:01.520 "num_base_bdevs": 2, 00:12:01.520 "num_base_bdevs_discovered": 2, 00:12:01.520 "num_base_bdevs_operational": 2, 00:12:01.520 "base_bdevs_list": [ 00:12:01.520 { 00:12:01.520 "name": "pt1", 00:12:01.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:01.520 "is_configured": true, 00:12:01.520 "data_offset": 2048, 00:12:01.520 "data_size": 63488 00:12:01.520 }, 00:12:01.520 { 00:12:01.520 "name": "pt2", 00:12:01.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.520 "is_configured": true, 00:12:01.520 "data_offset": 2048, 00:12:01.520 "data_size": 63488 00:12:01.520 } 00:12:01.520 ] 00:12:01.520 } 00:12:01.520 } 00:12:01.520 }' 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:01.520 pt2' 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.520 10:40:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.779 10:40:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:01.779 [2024-10-30 10:40:23.106259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2ea46071-877f-442c-ab0d-5cce56996865 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
2ea46071-877f-442c-ab0d-5cce56996865 ']' 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.779 [2024-10-30 10:40:23.153878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:01.779 [2024-10-30 10:40:23.154033] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:01.779 [2024-10-30 10:40:23.154266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.779 [2024-10-30 10:40:23.154449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.779 [2024-10-30 10:40:23.154606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:01.779 10:40:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.779 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.038 [2024-10-30 10:40:23.293932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:02.038 [2024-10-30 10:40:23.296395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:02.038 [2024-10-30 10:40:23.296496] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:02.038 [2024-10-30 10:40:23.296574] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:02.038 [2024-10-30 10:40:23.296601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.038 [2024-10-30 10:40:23.296617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:02.038 request: 00:12:02.038 { 00:12:02.038 "name": "raid_bdev1", 00:12:02.038 "raid_level": "concat", 00:12:02.038 "base_bdevs": [ 00:12:02.038 "malloc1", 00:12:02.038 "malloc2" 00:12:02.038 ], 00:12:02.038 "strip_size_kb": 64, 
00:12:02.038 "superblock": false, 00:12:02.038 "method": "bdev_raid_create", 00:12:02.038 "req_id": 1 00:12:02.038 } 00:12:02.038 Got JSON-RPC error response 00:12:02.038 response: 00:12:02.038 { 00:12:02.038 "code": -17, 00:12:02.038 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:02.038 } 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.038 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.039 [2024-10-30 10:40:23.357960] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:12:02.039 [2024-10-30 10:40:23.358052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.039 [2024-10-30 10:40:23.358083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:02.039 [2024-10-30 10:40:23.358101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.039 [2024-10-30 10:40:23.360947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.039 [2024-10-30 10:40:23.361016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:02.039 [2024-10-30 10:40:23.361126] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:02.039 [2024-10-30 10:40:23.361207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:02.039 pt1 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.039 "name": "raid_bdev1", 00:12:02.039 "uuid": "2ea46071-877f-442c-ab0d-5cce56996865", 00:12:02.039 "strip_size_kb": 64, 00:12:02.039 "state": "configuring", 00:12:02.039 "raid_level": "concat", 00:12:02.039 "superblock": true, 00:12:02.039 "num_base_bdevs": 2, 00:12:02.039 "num_base_bdevs_discovered": 1, 00:12:02.039 "num_base_bdevs_operational": 2, 00:12:02.039 "base_bdevs_list": [ 00:12:02.039 { 00:12:02.039 "name": "pt1", 00:12:02.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:02.039 "is_configured": true, 00:12:02.039 "data_offset": 2048, 00:12:02.039 "data_size": 63488 00:12:02.039 }, 00:12:02.039 { 00:12:02.039 "name": null, 00:12:02.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.039 "is_configured": false, 00:12:02.039 "data_offset": 2048, 00:12:02.039 "data_size": 63488 00:12:02.039 } 00:12:02.039 ] 00:12:02.039 }' 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.039 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.606 [2024-10-30 10:40:23.878132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:02.606 [2024-10-30 10:40:23.878218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.606 [2024-10-30 10:40:23.878250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:02.606 [2024-10-30 10:40:23.878268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.606 [2024-10-30 10:40:23.878866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.606 [2024-10-30 10:40:23.878907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:02.606 [2024-10-30 10:40:23.879026] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:02.606 [2024-10-30 10:40:23.879065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:02.606 [2024-10-30 10:40:23.879222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:02.606 [2024-10-30 10:40:23.879251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:02.606 [2024-10-30 10:40:23.879558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:02.606 [2024-10-30 10:40:23.879752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:12:02.606 [2024-10-30 10:40:23.879777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:02.606 [2024-10-30 10:40:23.879948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.606 pt2 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.606 "name": "raid_bdev1", 00:12:02.606 "uuid": "2ea46071-877f-442c-ab0d-5cce56996865", 00:12:02.606 "strip_size_kb": 64, 00:12:02.606 "state": "online", 00:12:02.606 "raid_level": "concat", 00:12:02.606 "superblock": true, 00:12:02.606 "num_base_bdevs": 2, 00:12:02.606 "num_base_bdevs_discovered": 2, 00:12:02.606 "num_base_bdevs_operational": 2, 00:12:02.606 "base_bdevs_list": [ 00:12:02.606 { 00:12:02.606 "name": "pt1", 00:12:02.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:02.606 "is_configured": true, 00:12:02.606 "data_offset": 2048, 00:12:02.606 "data_size": 63488 00:12:02.606 }, 00:12:02.606 { 00:12:02.606 "name": "pt2", 00:12:02.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.606 "is_configured": true, 00:12:02.606 "data_offset": 2048, 00:12:02.606 "data_size": 63488 00:12:02.606 } 00:12:02.606 ] 00:12:02.606 }' 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.606 10:40:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.185 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:03.185 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:03.185 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.185 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.185 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.185 10:40:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.186 [2024-10-30 10:40:24.402573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.186 "name": "raid_bdev1", 00:12:03.186 "aliases": [ 00:12:03.186 "2ea46071-877f-442c-ab0d-5cce56996865" 00:12:03.186 ], 00:12:03.186 "product_name": "Raid Volume", 00:12:03.186 "block_size": 512, 00:12:03.186 "num_blocks": 126976, 00:12:03.186 "uuid": "2ea46071-877f-442c-ab0d-5cce56996865", 00:12:03.186 "assigned_rate_limits": { 00:12:03.186 "rw_ios_per_sec": 0, 00:12:03.186 "rw_mbytes_per_sec": 0, 00:12:03.186 "r_mbytes_per_sec": 0, 00:12:03.186 "w_mbytes_per_sec": 0 00:12:03.186 }, 00:12:03.186 "claimed": false, 00:12:03.186 "zoned": false, 00:12:03.186 "supported_io_types": { 00:12:03.186 "read": true, 00:12:03.186 "write": true, 00:12:03.186 "unmap": true, 00:12:03.186 "flush": true, 00:12:03.186 "reset": true, 00:12:03.186 "nvme_admin": false, 00:12:03.186 "nvme_io": false, 00:12:03.186 "nvme_io_md": false, 00:12:03.186 "write_zeroes": true, 00:12:03.186 "zcopy": false, 00:12:03.186 "get_zone_info": false, 00:12:03.186 "zone_management": false, 00:12:03.186 "zone_append": false, 00:12:03.186 "compare": false, 00:12:03.186 "compare_and_write": false, 00:12:03.186 "abort": false, 00:12:03.186 "seek_hole": false, 00:12:03.186 
"seek_data": false, 00:12:03.186 "copy": false, 00:12:03.186 "nvme_iov_md": false 00:12:03.186 }, 00:12:03.186 "memory_domains": [ 00:12:03.186 { 00:12:03.186 "dma_device_id": "system", 00:12:03.186 "dma_device_type": 1 00:12:03.186 }, 00:12:03.186 { 00:12:03.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.186 "dma_device_type": 2 00:12:03.186 }, 00:12:03.186 { 00:12:03.186 "dma_device_id": "system", 00:12:03.186 "dma_device_type": 1 00:12:03.186 }, 00:12:03.186 { 00:12:03.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.186 "dma_device_type": 2 00:12:03.186 } 00:12:03.186 ], 00:12:03.186 "driver_specific": { 00:12:03.186 "raid": { 00:12:03.186 "uuid": "2ea46071-877f-442c-ab0d-5cce56996865", 00:12:03.186 "strip_size_kb": 64, 00:12:03.186 "state": "online", 00:12:03.186 "raid_level": "concat", 00:12:03.186 "superblock": true, 00:12:03.186 "num_base_bdevs": 2, 00:12:03.186 "num_base_bdevs_discovered": 2, 00:12:03.186 "num_base_bdevs_operational": 2, 00:12:03.186 "base_bdevs_list": [ 00:12:03.186 { 00:12:03.186 "name": "pt1", 00:12:03.186 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:03.186 "is_configured": true, 00:12:03.186 "data_offset": 2048, 00:12:03.186 "data_size": 63488 00:12:03.186 }, 00:12:03.186 { 00:12:03.186 "name": "pt2", 00:12:03.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.186 "is_configured": true, 00:12:03.186 "data_offset": 2048, 00:12:03.186 "data_size": 63488 00:12:03.186 } 00:12:03.186 ] 00:12:03.186 } 00:12:03.186 } 00:12:03.186 }' 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:03.186 pt2' 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.186 10:40:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.186 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.444 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.444 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.444 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:12:03.444 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.444 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.444 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.444 [2024-10-30 10:40:24.666654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.444 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2ea46071-877f-442c-ab0d-5cce56996865 '!=' 2ea46071-877f-442c-ab0d-5cce56996865 ']' 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62350 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62350 ']' 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62350 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62350 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:03.445 killing process with pid 62350 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 62350' 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62350 00:12:03.445 [2024-10-30 10:40:24.741844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.445 10:40:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62350 00:12:03.445 [2024-10-30 10:40:24.741986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.445 [2024-10-30 10:40:24.742058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.445 [2024-10-30 10:40:24.742078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:03.702 [2024-10-30 10:40:24.929532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.638 10:40:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:04.639 00:12:04.639 real 0m4.951s 00:12:04.639 user 0m7.324s 00:12:04.639 sys 0m0.710s 00:12:04.639 10:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.639 10:40:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.639 ************************************ 00:12:04.639 END TEST raid_superblock_test 00:12:04.639 ************************************ 00:12:04.639 10:40:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:12:04.639 10:40:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:04.639 10:40:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:04.639 10:40:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.639 ************************************ 00:12:04.639 START TEST raid_read_error_test 00:12:04.639 ************************************ 00:12:04.639 10:40:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:04.639 10:40:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k45eTgTmEJ 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62566 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62566 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62566 ']' 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:04.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:04.639 10:40:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.898 [2024-10-30 10:40:26.158111] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:12:04.898 [2024-10-30 10:40:26.158306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62566 ] 00:12:04.898 [2024-10-30 10:40:26.338840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.156 [2024-10-30 10:40:26.470437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.415 [2024-10-30 10:40:26.676607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.415 [2024-10-30 10:40:26.676689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.674 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:05.674 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:05.674 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.674 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.674 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.674 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.933 BaseBdev1_malloc 00:12:05.933 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.933 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.934 true 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.934 [2024-10-30 10:40:27.175314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.934 [2024-10-30 10:40:27.175395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.934 [2024-10-30 10:40:27.175426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:05.934 [2024-10-30 10:40:27.175443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.934 [2024-10-30 10:40:27.178540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.934 [2024-10-30 10:40:27.178606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.934 BaseBdev1 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.934 BaseBdev2_malloc 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.934 true 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.934 [2024-10-30 10:40:27.235469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.934 [2024-10-30 10:40:27.235537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.934 [2024-10-30 10:40:27.235561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:05.934 [2024-10-30 10:40:27.235578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.934 [2024-10-30 10:40:27.238367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.934 [2024-10-30 10:40:27.238415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.934 BaseBdev2 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.934 [2024-10-30 10:40:27.243552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:12:05.934 [2024-10-30 10:40:27.245918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.934 [2024-10-30 10:40:27.246194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.934 [2024-10-30 10:40:27.246228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:05.934 [2024-10-30 10:40:27.246523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:05.934 [2024-10-30 10:40:27.246759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.934 [2024-10-30 10:40:27.246788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:05.934 [2024-10-30 10:40:27.247004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.934 "name": "raid_bdev1", 00:12:05.934 "uuid": "d0f4158a-86c5-405d-b9b9-67ce2560408e", 00:12:05.934 "strip_size_kb": 64, 00:12:05.934 "state": "online", 00:12:05.934 "raid_level": "concat", 00:12:05.934 "superblock": true, 00:12:05.934 "num_base_bdevs": 2, 00:12:05.934 "num_base_bdevs_discovered": 2, 00:12:05.934 "num_base_bdevs_operational": 2, 00:12:05.934 "base_bdevs_list": [ 00:12:05.934 { 00:12:05.934 "name": "BaseBdev1", 00:12:05.934 "uuid": "1755cea3-6c25-5398-9785-6d93451e5474", 00:12:05.934 "is_configured": true, 00:12:05.934 "data_offset": 2048, 00:12:05.934 "data_size": 63488 00:12:05.934 }, 00:12:05.934 { 00:12:05.934 "name": "BaseBdev2", 00:12:05.934 "uuid": "382c3757-6abc-56ba-9338-aaba5361789f", 00:12:05.934 "is_configured": true, 00:12:05.934 "data_offset": 2048, 00:12:05.934 "data_size": 63488 00:12:05.934 } 00:12:05.934 ] 00:12:05.934 }' 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.934 10:40:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.502 10:40:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:06.502 10:40:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.502 [2024-10-30 10:40:27.881107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.438 "name": "raid_bdev1", 00:12:07.438 "uuid": "d0f4158a-86c5-405d-b9b9-67ce2560408e", 00:12:07.438 "strip_size_kb": 64, 00:12:07.438 "state": "online", 00:12:07.438 "raid_level": "concat", 00:12:07.438 "superblock": true, 00:12:07.438 "num_base_bdevs": 2, 00:12:07.438 "num_base_bdevs_discovered": 2, 00:12:07.438 "num_base_bdevs_operational": 2, 00:12:07.438 "base_bdevs_list": [ 00:12:07.438 { 00:12:07.438 "name": "BaseBdev1", 00:12:07.438 "uuid": "1755cea3-6c25-5398-9785-6d93451e5474", 00:12:07.438 "is_configured": true, 00:12:07.438 "data_offset": 2048, 00:12:07.438 "data_size": 63488 00:12:07.438 }, 00:12:07.438 { 00:12:07.438 "name": "BaseBdev2", 00:12:07.438 "uuid": "382c3757-6abc-56ba-9338-aaba5361789f", 00:12:07.438 "is_configured": true, 00:12:07.438 "data_offset": 2048, 00:12:07.438 "data_size": 63488 00:12:07.438 } 00:12:07.438 ] 00:12:07.438 }' 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.438 10:40:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.004 10:40:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.004 [2024-10-30 10:40:29.283847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.004 [2024-10-30 10:40:29.283908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.004 [2024-10-30 10:40:29.287302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.004 [2024-10-30 10:40:29.287373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.004 [2024-10-30 10:40:29.287429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.004 [2024-10-30 10:40:29.287451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:08.004 { 00:12:08.004 "results": [ 00:12:08.004 { 00:12:08.004 "job": "raid_bdev1", 00:12:08.004 "core_mask": "0x1", 00:12:08.004 "workload": "randrw", 00:12:08.004 "percentage": 50, 00:12:08.004 "status": "finished", 00:12:08.004 "queue_depth": 1, 00:12:08.004 "io_size": 131072, 00:12:08.004 "runtime": 1.400198, 00:12:08.004 "iops": 10753.479150805815, 00:12:08.004 "mibps": 1344.184893850727, 00:12:08.004 "io_failed": 1, 00:12:08.004 "io_timeout": 0, 00:12:08.004 "avg_latency_us": 129.61406826935848, 00:12:08.004 "min_latency_us": 39.56363636363636, 00:12:08.004 "max_latency_us": 2293.76 00:12:08.004 } 00:12:08.004 ], 00:12:08.004 "core_count": 1 00:12:08.004 } 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62566 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62566 ']' 00:12:08.004 10:40:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62566 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62566 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:08.004 killing process with pid 62566 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62566' 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62566 00:12:08.004 [2024-10-30 10:40:29.324372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:08.004 10:40:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62566 00:12:08.004 [2024-10-30 10:40:29.437847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k45eTgTmEJ 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:09.387 00:12:09.387 real 0m4.537s 00:12:09.387 user 0m5.641s 00:12:09.387 sys 0m0.559s 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:09.387 ************************************ 00:12:09.387 10:40:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.387 END TEST raid_read_error_test 00:12:09.387 ************************************ 00:12:09.387 10:40:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:12:09.387 10:40:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:09.387 10:40:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:09.387 10:40:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.387 ************************************ 00:12:09.387 START TEST raid_write_error_test 00:12:09.387 ************************************ 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.387 10:40:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Pa6du9JVut 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62713 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62713 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:09.387 10:40:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62713 ']' 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:09.387 10:40:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.387 [2024-10-30 10:40:30.720859] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:12:09.387 [2024-10-30 10:40:30.721059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62713 ] 00:12:09.646 [2024-10-30 10:40:30.893333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.646 [2024-10-30 10:40:31.020726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.905 [2024-10-30 10:40:31.240837] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.905 [2024-10-30 10:40:31.240915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.474 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 BaseBdev1_malloc 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 true 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 [2024-10-30 10:40:31.783342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:10.475 [2024-10-30 10:40:31.783412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.475 [2024-10-30 10:40:31.783440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:10.475 [2024-10-30 10:40:31.783461] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.475 [2024-10-30 10:40:31.786234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.475 [2024-10-30 10:40:31.786282] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:10.475 BaseBdev1 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 BaseBdev2_malloc 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 true 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 [2024-10-30 10:40:31.846652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:10.475 [2024-10-30 10:40:31.846723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.475 [2024-10-30 10:40:31.846749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:10.475 
[2024-10-30 10:40:31.846767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.475 [2024-10-30 10:40:31.849495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.475 [2024-10-30 10:40:31.849544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:10.475 BaseBdev2 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 [2024-10-30 10:40:31.858732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.475 [2024-10-30 10:40:31.861164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.475 [2024-10-30 10:40:31.861416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:10.475 [2024-10-30 10:40:31.861456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:10.475 [2024-10-30 10:40:31.861742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:10.475 [2024-10-30 10:40:31.861995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:10.475 [2024-10-30 10:40:31.862023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:10.475 [2024-10-30 10:40:31.862209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.475 
10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.475 "name": "raid_bdev1", 00:12:10.475 "uuid": "333b8c4f-1db8-48b7-9ecf-595729d255eb", 00:12:10.475 "strip_size_kb": 64, 00:12:10.475 "state": "online", 00:12:10.475 "raid_level": "concat", 00:12:10.475 "superblock": true, 
00:12:10.475 "num_base_bdevs": 2, 00:12:10.475 "num_base_bdevs_discovered": 2, 00:12:10.475 "num_base_bdevs_operational": 2, 00:12:10.475 "base_bdevs_list": [ 00:12:10.475 { 00:12:10.475 "name": "BaseBdev1", 00:12:10.475 "uuid": "fc302a6d-4ae8-5a06-9708-5b77feb8a406", 00:12:10.475 "is_configured": true, 00:12:10.475 "data_offset": 2048, 00:12:10.475 "data_size": 63488 00:12:10.475 }, 00:12:10.475 { 00:12:10.475 "name": "BaseBdev2", 00:12:10.475 "uuid": "7d17cfeb-6357-5a16-a305-6c90c45af15c", 00:12:10.475 "is_configured": true, 00:12:10.475 "data_offset": 2048, 00:12:10.475 "data_size": 63488 00:12:10.475 } 00:12:10.475 ] 00:12:10.475 }' 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.475 10:40:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.042 10:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:11.042 10:40:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:11.302 [2024-10-30 10:40:32.524317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.237 "name": "raid_bdev1", 00:12:12.237 "uuid": "333b8c4f-1db8-48b7-9ecf-595729d255eb", 00:12:12.237 "strip_size_kb": 64, 00:12:12.237 "state": "online", 00:12:12.237 "raid_level": "concat", 
00:12:12.237 "superblock": true, 00:12:12.237 "num_base_bdevs": 2, 00:12:12.237 "num_base_bdevs_discovered": 2, 00:12:12.237 "num_base_bdevs_operational": 2, 00:12:12.237 "base_bdevs_list": [ 00:12:12.237 { 00:12:12.237 "name": "BaseBdev1", 00:12:12.237 "uuid": "fc302a6d-4ae8-5a06-9708-5b77feb8a406", 00:12:12.237 "is_configured": true, 00:12:12.237 "data_offset": 2048, 00:12:12.237 "data_size": 63488 00:12:12.237 }, 00:12:12.237 { 00:12:12.237 "name": "BaseBdev2", 00:12:12.237 "uuid": "7d17cfeb-6357-5a16-a305-6c90c45af15c", 00:12:12.237 "is_configured": true, 00:12:12.237 "data_offset": 2048, 00:12:12.237 "data_size": 63488 00:12:12.237 } 00:12:12.237 ] 00:12:12.237 }' 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.237 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.496 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:12.496 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.496 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.496 [2024-10-30 10:40:33.901946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.496 [2024-10-30 10:40:33.902004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.496 [2024-10-30 10:40:33.905361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.497 [2024-10-30 10:40:33.905426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.497 [2024-10-30 10:40:33.905470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.497 [2024-10-30 10:40:33.905492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:12.497 { 
00:12:12.497 "results": [ 00:12:12.497 { 00:12:12.497 "job": "raid_bdev1", 00:12:12.497 "core_mask": "0x1", 00:12:12.497 "workload": "randrw", 00:12:12.497 "percentage": 50, 00:12:12.497 "status": "finished", 00:12:12.497 "queue_depth": 1, 00:12:12.497 "io_size": 131072, 00:12:12.497 "runtime": 1.375266, 00:12:12.497 "iops": 11140.390295404672, 00:12:12.497 "mibps": 1392.548786925584, 00:12:12.497 "io_failed": 1, 00:12:12.497 "io_timeout": 0, 00:12:12.497 "avg_latency_us": 125.03091977073963, 00:12:12.497 "min_latency_us": 41.658181818181816, 00:12:12.497 "max_latency_us": 1854.370909090909 00:12:12.497 } 00:12:12.497 ], 00:12:12.497 "core_count": 1 00:12:12.497 } 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62713 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 62713 ']' 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62713 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62713 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:12.497 killing process with pid 62713 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62713' 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62713 00:12:12.497 [2024-10-30 10:40:33.941201] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.497 10:40:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62713 00:12:12.755 [2024-10-30 10:40:34.063732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.694 10:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Pa6du9JVut 00:12:13.694 10:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:13.694 10:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:13.694 10:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:13.694 10:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:13.694 10:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:13.694 10:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:13.694 10:40:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:13.694 00:12:13.694 real 0m4.543s 00:12:13.694 user 0m5.740s 00:12:13.694 sys 0m0.549s 00:12:13.954 10:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.954 10:40:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.954 ************************************ 00:12:13.954 END TEST raid_write_error_test 00:12:13.954 ************************************ 00:12:13.954 10:40:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:13.954 10:40:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:12:13.954 10:40:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:13.954 10:40:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.954 10:40:35 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.954 ************************************ 00:12:13.954 START TEST raid_state_function_test 00:12:13.954 ************************************ 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:13.954 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62851 00:12:13.955 Process raid pid: 62851 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62851' 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62851 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62851 ']' 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:13.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.955 10:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:13.955 [2024-10-30 10:40:35.311486] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:12:13.955 [2024-10-30 10:40:35.311651] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.215 [2024-10-30 10:40:35.484177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.215 [2024-10-30 10:40:35.639732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.474 [2024-10-30 10:40:35.849183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.474 [2024-10-30 10:40:35.849240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.040 [2024-10-30 10:40:36.261114] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.040 [2024-10-30 10:40:36.261176] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.040 [2024-10-30 10:40:36.261193] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.040 [2024-10-30 10:40:36.261211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.040 10:40:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.040 "name": "Existed_Raid", 00:12:15.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.040 "strip_size_kb": 0, 00:12:15.040 "state": "configuring", 00:12:15.040 "raid_level": "raid1", 00:12:15.040 "superblock": false, 00:12:15.040 "num_base_bdevs": 2, 00:12:15.040 "num_base_bdevs_discovered": 0, 00:12:15.040 "num_base_bdevs_operational": 2, 00:12:15.040 "base_bdevs_list": [ 00:12:15.040 { 00:12:15.040 "name": "BaseBdev1", 00:12:15.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.040 "is_configured": false, 00:12:15.040 "data_offset": 0, 00:12:15.040 "data_size": 0 00:12:15.040 }, 00:12:15.040 { 00:12:15.040 "name": "BaseBdev2", 00:12:15.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.040 "is_configured": false, 00:12:15.040 "data_offset": 0, 00:12:15.040 "data_size": 0 00:12:15.040 } 00:12:15.040 ] 00:12:15.040 }' 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.040 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.298 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.298 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.298 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.557 [2024-10-30 10:40:36.769203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.557 [2024-10-30 10:40:36.769259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:15.557 10:40:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.557 [2024-10-30 10:40:36.777190] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.557 [2024-10-30 10:40:36.777244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.557 [2024-10-30 10:40:36.777261] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.557 [2024-10-30 10:40:36.777280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.557 [2024-10-30 10:40:36.821896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.557 BaseBdev1 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.557 [ 00:12:15.557 { 00:12:15.557 "name": "BaseBdev1", 00:12:15.557 "aliases": [ 00:12:15.557 "c9ea06d4-5326-4920-81da-60a117fa40e9" 00:12:15.557 ], 00:12:15.557 "product_name": "Malloc disk", 00:12:15.557 "block_size": 512, 00:12:15.557 "num_blocks": 65536, 00:12:15.557 "uuid": "c9ea06d4-5326-4920-81da-60a117fa40e9", 00:12:15.557 "assigned_rate_limits": { 00:12:15.557 "rw_ios_per_sec": 0, 00:12:15.557 "rw_mbytes_per_sec": 0, 00:12:15.557 "r_mbytes_per_sec": 0, 00:12:15.557 "w_mbytes_per_sec": 0 00:12:15.557 }, 00:12:15.557 "claimed": true, 00:12:15.557 "claim_type": "exclusive_write", 00:12:15.557 "zoned": false, 00:12:15.557 "supported_io_types": { 00:12:15.557 "read": true, 00:12:15.557 "write": true, 00:12:15.557 "unmap": true, 00:12:15.557 "flush": true, 00:12:15.557 "reset": true, 00:12:15.557 
"nvme_admin": false, 00:12:15.557 "nvme_io": false, 00:12:15.557 "nvme_io_md": false, 00:12:15.557 "write_zeroes": true, 00:12:15.557 "zcopy": true, 00:12:15.557 "get_zone_info": false, 00:12:15.557 "zone_management": false, 00:12:15.557 "zone_append": false, 00:12:15.557 "compare": false, 00:12:15.557 "compare_and_write": false, 00:12:15.557 "abort": true, 00:12:15.557 "seek_hole": false, 00:12:15.557 "seek_data": false, 00:12:15.557 "copy": true, 00:12:15.557 "nvme_iov_md": false 00:12:15.557 }, 00:12:15.557 "memory_domains": [ 00:12:15.557 { 00:12:15.557 "dma_device_id": "system", 00:12:15.557 "dma_device_type": 1 00:12:15.557 }, 00:12:15.557 { 00:12:15.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.557 "dma_device_type": 2 00:12:15.557 } 00:12:15.557 ], 00:12:15.557 "driver_specific": {} 00:12:15.557 } 00:12:15.557 ] 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.557 "name": "Existed_Raid", 00:12:15.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.557 "strip_size_kb": 0, 00:12:15.557 "state": "configuring", 00:12:15.557 "raid_level": "raid1", 00:12:15.557 "superblock": false, 00:12:15.557 "num_base_bdevs": 2, 00:12:15.557 "num_base_bdevs_discovered": 1, 00:12:15.557 "num_base_bdevs_operational": 2, 00:12:15.557 "base_bdevs_list": [ 00:12:15.557 { 00:12:15.557 "name": "BaseBdev1", 00:12:15.557 "uuid": "c9ea06d4-5326-4920-81da-60a117fa40e9", 00:12:15.557 "is_configured": true, 00:12:15.557 "data_offset": 0, 00:12:15.557 "data_size": 65536 00:12:15.557 }, 00:12:15.557 { 00:12:15.557 "name": "BaseBdev2", 00:12:15.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.557 "is_configured": false, 00:12:15.557 "data_offset": 0, 00:12:15.557 "data_size": 0 00:12:15.557 } 00:12:15.557 ] 00:12:15.557 }' 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.557 10:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.124 10:40:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.124 [2024-10-30 10:40:37.366103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.124 [2024-10-30 10:40:37.366170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.124 [2024-10-30 10:40:37.374126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.124 [2024-10-30 10:40:37.376560] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.124 [2024-10-30 10:40:37.376613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.124 "name": "Existed_Raid", 00:12:16.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.124 "strip_size_kb": 0, 00:12:16.124 "state": "configuring", 00:12:16.124 "raid_level": "raid1", 00:12:16.124 "superblock": false, 00:12:16.124 "num_base_bdevs": 2, 00:12:16.124 "num_base_bdevs_discovered": 1, 00:12:16.124 "num_base_bdevs_operational": 2, 
00:12:16.124 "base_bdevs_list": [ 00:12:16.124 { 00:12:16.124 "name": "BaseBdev1", 00:12:16.124 "uuid": "c9ea06d4-5326-4920-81da-60a117fa40e9", 00:12:16.124 "is_configured": true, 00:12:16.124 "data_offset": 0, 00:12:16.124 "data_size": 65536 00:12:16.124 }, 00:12:16.124 { 00:12:16.124 "name": "BaseBdev2", 00:12:16.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.124 "is_configured": false, 00:12:16.124 "data_offset": 0, 00:12:16.124 "data_size": 0 00:12:16.124 } 00:12:16.124 ] 00:12:16.124 }' 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.124 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.691 [2024-10-30 10:40:37.928205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.691 [2024-10-30 10:40:37.928276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:16.691 [2024-10-30 10:40:37.928289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:16.691 [2024-10-30 10:40:37.928624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:16.691 [2024-10-30 10:40:37.928849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:16.691 [2024-10-30 10:40:37.928882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:16.691 [2024-10-30 10:40:37.929208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.691 BaseBdev2 00:12:16.691 
10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.691 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.691 [ 00:12:16.691 { 00:12:16.691 "name": "BaseBdev2", 00:12:16.691 "aliases": [ 00:12:16.691 "cf3a46e7-958c-4a83-83b0-e8a98af5419a" 00:12:16.691 ], 00:12:16.691 "product_name": "Malloc disk", 00:12:16.691 "block_size": 512, 00:12:16.691 "num_blocks": 65536, 00:12:16.691 "uuid": "cf3a46e7-958c-4a83-83b0-e8a98af5419a", 00:12:16.691 "assigned_rate_limits": { 00:12:16.691 "rw_ios_per_sec": 0, 00:12:16.692 "rw_mbytes_per_sec": 0, 
00:12:16.692 "r_mbytes_per_sec": 0, 00:12:16.692 "w_mbytes_per_sec": 0 00:12:16.692 }, 00:12:16.692 "claimed": true, 00:12:16.692 "claim_type": "exclusive_write", 00:12:16.692 "zoned": false, 00:12:16.692 "supported_io_types": { 00:12:16.692 "read": true, 00:12:16.692 "write": true, 00:12:16.692 "unmap": true, 00:12:16.692 "flush": true, 00:12:16.692 "reset": true, 00:12:16.692 "nvme_admin": false, 00:12:16.692 "nvme_io": false, 00:12:16.692 "nvme_io_md": false, 00:12:16.692 "write_zeroes": true, 00:12:16.692 "zcopy": true, 00:12:16.692 "get_zone_info": false, 00:12:16.692 "zone_management": false, 00:12:16.692 "zone_append": false, 00:12:16.692 "compare": false, 00:12:16.692 "compare_and_write": false, 00:12:16.692 "abort": true, 00:12:16.692 "seek_hole": false, 00:12:16.692 "seek_data": false, 00:12:16.692 "copy": true, 00:12:16.692 "nvme_iov_md": false 00:12:16.692 }, 00:12:16.692 "memory_domains": [ 00:12:16.692 { 00:12:16.692 "dma_device_id": "system", 00:12:16.692 "dma_device_type": 1 00:12:16.692 }, 00:12:16.692 { 00:12:16.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.692 "dma_device_type": 2 00:12:16.692 } 00:12:16.692 ], 00:12:16.692 "driver_specific": {} 00:12:16.692 } 00:12:16.692 ] 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.692 10:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.692 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.692 "name": "Existed_Raid", 00:12:16.692 "uuid": "05c7f12b-3682-444e-be9f-ad1f536692ed", 00:12:16.692 "strip_size_kb": 0, 00:12:16.692 "state": "online", 00:12:16.692 "raid_level": "raid1", 00:12:16.692 "superblock": false, 00:12:16.692 "num_base_bdevs": 2, 00:12:16.692 "num_base_bdevs_discovered": 2, 00:12:16.692 "num_base_bdevs_operational": 2, 00:12:16.692 "base_bdevs_list": [ 00:12:16.692 { 00:12:16.692 "name": "BaseBdev1", 00:12:16.692 "uuid": "c9ea06d4-5326-4920-81da-60a117fa40e9", 00:12:16.692 "is_configured": 
true, 00:12:16.692 "data_offset": 0, 00:12:16.692 "data_size": 65536 00:12:16.692 }, 00:12:16.692 { 00:12:16.692 "name": "BaseBdev2", 00:12:16.692 "uuid": "cf3a46e7-958c-4a83-83b0-e8a98af5419a", 00:12:16.692 "is_configured": true, 00:12:16.692 "data_offset": 0, 00:12:16.692 "data_size": 65536 00:12:16.692 } 00:12:16.692 ] 00:12:16.692 }' 00:12:16.692 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.692 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.305 [2024-10-30 10:40:38.476747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:12:17.305 "name": "Existed_Raid", 00:12:17.305 "aliases": [ 00:12:17.305 "05c7f12b-3682-444e-be9f-ad1f536692ed" 00:12:17.305 ], 00:12:17.305 "product_name": "Raid Volume", 00:12:17.305 "block_size": 512, 00:12:17.305 "num_blocks": 65536, 00:12:17.305 "uuid": "05c7f12b-3682-444e-be9f-ad1f536692ed", 00:12:17.305 "assigned_rate_limits": { 00:12:17.305 "rw_ios_per_sec": 0, 00:12:17.305 "rw_mbytes_per_sec": 0, 00:12:17.305 "r_mbytes_per_sec": 0, 00:12:17.305 "w_mbytes_per_sec": 0 00:12:17.305 }, 00:12:17.305 "claimed": false, 00:12:17.305 "zoned": false, 00:12:17.305 "supported_io_types": { 00:12:17.305 "read": true, 00:12:17.305 "write": true, 00:12:17.305 "unmap": false, 00:12:17.305 "flush": false, 00:12:17.305 "reset": true, 00:12:17.305 "nvme_admin": false, 00:12:17.305 "nvme_io": false, 00:12:17.305 "nvme_io_md": false, 00:12:17.305 "write_zeroes": true, 00:12:17.305 "zcopy": false, 00:12:17.305 "get_zone_info": false, 00:12:17.305 "zone_management": false, 00:12:17.305 "zone_append": false, 00:12:17.305 "compare": false, 00:12:17.305 "compare_and_write": false, 00:12:17.305 "abort": false, 00:12:17.305 "seek_hole": false, 00:12:17.305 "seek_data": false, 00:12:17.305 "copy": false, 00:12:17.305 "nvme_iov_md": false 00:12:17.305 }, 00:12:17.305 "memory_domains": [ 00:12:17.305 { 00:12:17.305 "dma_device_id": "system", 00:12:17.305 "dma_device_type": 1 00:12:17.305 }, 00:12:17.305 { 00:12:17.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.305 "dma_device_type": 2 00:12:17.305 }, 00:12:17.305 { 00:12:17.305 "dma_device_id": "system", 00:12:17.305 "dma_device_type": 1 00:12:17.305 }, 00:12:17.305 { 00:12:17.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.305 "dma_device_type": 2 00:12:17.305 } 00:12:17.305 ], 00:12:17.305 "driver_specific": { 00:12:17.305 "raid": { 00:12:17.305 "uuid": "05c7f12b-3682-444e-be9f-ad1f536692ed", 00:12:17.305 "strip_size_kb": 0, 00:12:17.305 "state": "online", 00:12:17.305 "raid_level": "raid1", 00:12:17.305 "superblock": 
false, 00:12:17.305 "num_base_bdevs": 2, 00:12:17.305 "num_base_bdevs_discovered": 2, 00:12:17.305 "num_base_bdevs_operational": 2, 00:12:17.305 "base_bdevs_list": [ 00:12:17.305 { 00:12:17.305 "name": "BaseBdev1", 00:12:17.305 "uuid": "c9ea06d4-5326-4920-81da-60a117fa40e9", 00:12:17.305 "is_configured": true, 00:12:17.305 "data_offset": 0, 00:12:17.305 "data_size": 65536 00:12:17.305 }, 00:12:17.305 { 00:12:17.305 "name": "BaseBdev2", 00:12:17.305 "uuid": "cf3a46e7-958c-4a83-83b0-e8a98af5419a", 00:12:17.305 "is_configured": true, 00:12:17.305 "data_offset": 0, 00:12:17.305 "data_size": 65536 00:12:17.305 } 00:12:17.305 ] 00:12:17.305 } 00:12:17.305 } 00:12:17.305 }' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:17.305 BaseBdev2' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.305 10:40:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.305 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.305 [2024-10-30 10:40:38.744541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case 
$1 in 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:17.562 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.563 "name": "Existed_Raid", 00:12:17.563 "uuid": "05c7f12b-3682-444e-be9f-ad1f536692ed", 00:12:17.563 "strip_size_kb": 0, 00:12:17.563 "state": "online", 00:12:17.563 "raid_level": "raid1", 00:12:17.563 "superblock": false, 00:12:17.563 "num_base_bdevs": 2, 00:12:17.563 "num_base_bdevs_discovered": 1, 00:12:17.563 "num_base_bdevs_operational": 1, 00:12:17.563 "base_bdevs_list": [ 00:12:17.563 { 00:12:17.563 "name": null, 00:12:17.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.563 "is_configured": false, 00:12:17.563 "data_offset": 0, 00:12:17.563 "data_size": 65536 00:12:17.563 }, 00:12:17.563 { 00:12:17.563 "name": "BaseBdev2", 00:12:17.563 "uuid": "cf3a46e7-958c-4a83-83b0-e8a98af5419a", 00:12:17.563 "is_configured": true, 00:12:17.563 "data_offset": 0, 00:12:17.563 "data_size": 65536 00:12:17.563 } 00:12:17.563 ] 00:12:17.563 }' 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.563 10:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.130 
10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.130 [2024-10-30 10:40:39.380039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.130 [2024-10-30 10:40:39.380162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.130 [2024-10-30 10:40:39.471577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.130 [2024-10-30 10:40:39.471655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.130 [2024-10-30 10:40:39.471676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62851 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62851 ']' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62851 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62851 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:18.130 killing process with pid 62851 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62851' 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62851 00:12:18.130 [2024-10-30 10:40:39.560015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.130 10:40:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62851 00:12:18.130 [2024-10-30 10:40:39.574725] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:19.505 00:12:19.505 real 0m5.403s 00:12:19.505 user 0m8.135s 00:12:19.505 sys 0m0.782s 
00:12:19.505 10:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.505 ************************************ 00:12:19.505 END TEST raid_state_function_test 00:12:19.505 ************************************ 00:12:19.505 10:40:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:12:19.505 10:40:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:19.505 10:40:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:19.505 10:40:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.505 ************************************ 00:12:19.505 START TEST raid_state_function_test_sb 00:12:19.505 ************************************ 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.505 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63104 00:12:19.506 Process raid pid: 63104 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63104' 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63104 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63104 ']' 00:12:19.506 10:40:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:19.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:19.506 10:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.506 [2024-10-30 10:40:40.793580] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:12:19.506 [2024-10-30 10:40:40.793790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.765 [2024-10-30 10:40:40.974534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.765 [2024-10-30 10:40:41.106927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.025 [2024-10-30 10:40:41.312272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.025 [2024-10-30 10:40:41.312333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.284 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:20.284 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:20.284 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:20.284 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.284 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.543 [2024-10-30 10:40:41.753587] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.543 [2024-10-30 10:40:41.753659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.543 [2024-10-30 10:40:41.753675] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.543 [2024-10-30 10:40:41.753692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.543 
10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.543 "name": "Existed_Raid", 00:12:20.543 "uuid": "633a8fed-284d-494e-a407-72f7cd8b8d87", 00:12:20.543 "strip_size_kb": 0, 
00:12:20.543 "state": "configuring", 00:12:20.543 "raid_level": "raid1", 00:12:20.543 "superblock": true, 00:12:20.543 "num_base_bdevs": 2, 00:12:20.543 "num_base_bdevs_discovered": 0, 00:12:20.543 "num_base_bdevs_operational": 2, 00:12:20.543 "base_bdevs_list": [ 00:12:20.543 { 00:12:20.543 "name": "BaseBdev1", 00:12:20.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.543 "is_configured": false, 00:12:20.543 "data_offset": 0, 00:12:20.543 "data_size": 0 00:12:20.543 }, 00:12:20.543 { 00:12:20.543 "name": "BaseBdev2", 00:12:20.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.543 "is_configured": false, 00:12:20.543 "data_offset": 0, 00:12:20.543 "data_size": 0 00:12:20.543 } 00:12:20.543 ] 00:12:20.543 }' 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.543 10:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.802 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:20.802 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.802 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.802 [2024-10-30 10:40:42.265699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.802 [2024-10-30 10:40:42.265746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:20.802 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.802 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:20.802 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.802 10:40:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.061 [2024-10-30 10:40:42.273664] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.061 [2024-10-30 10:40:42.273730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.061 [2024-10-30 10:40:42.273744] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.061 [2024-10-30 10:40:42.273762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.061 [2024-10-30 10:40:42.318689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.061 BaseBdev1 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.061 [ 00:12:21.061 { 00:12:21.061 "name": "BaseBdev1", 00:12:21.061 "aliases": [ 00:12:21.061 "22f6bce0-ebe0-4562-bbe9-f1573c74611f" 00:12:21.061 ], 00:12:21.061 "product_name": "Malloc disk", 00:12:21.061 "block_size": 512, 00:12:21.061 "num_blocks": 65536, 00:12:21.061 "uuid": "22f6bce0-ebe0-4562-bbe9-f1573c74611f", 00:12:21.061 "assigned_rate_limits": { 00:12:21.061 "rw_ios_per_sec": 0, 00:12:21.061 "rw_mbytes_per_sec": 0, 00:12:21.061 "r_mbytes_per_sec": 0, 00:12:21.061 "w_mbytes_per_sec": 0 00:12:21.061 }, 00:12:21.061 "claimed": true, 00:12:21.061 "claim_type": "exclusive_write", 00:12:21.061 "zoned": false, 00:12:21.061 "supported_io_types": { 00:12:21.061 "read": true, 00:12:21.061 "write": true, 00:12:21.061 "unmap": true, 00:12:21.061 "flush": true, 00:12:21.061 "reset": true, 00:12:21.061 "nvme_admin": false, 00:12:21.061 "nvme_io": false, 00:12:21.061 "nvme_io_md": false, 00:12:21.061 "write_zeroes": true, 00:12:21.061 "zcopy": true, 00:12:21.061 "get_zone_info": false, 00:12:21.061 "zone_management": false, 00:12:21.061 "zone_append": false, 00:12:21.061 "compare": false, 00:12:21.061 "compare_and_write": false, 00:12:21.061 
"abort": true, 00:12:21.061 "seek_hole": false, 00:12:21.061 "seek_data": false, 00:12:21.061 "copy": true, 00:12:21.061 "nvme_iov_md": false 00:12:21.061 }, 00:12:21.061 "memory_domains": [ 00:12:21.061 { 00:12:21.061 "dma_device_id": "system", 00:12:21.061 "dma_device_type": 1 00:12:21.061 }, 00:12:21.061 { 00:12:21.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.061 "dma_device_type": 2 00:12:21.061 } 00:12:21.061 ], 00:12:21.061 "driver_specific": {} 00:12:21.061 } 00:12:21.061 ] 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.061 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.062 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.062 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.062 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.062 "name": "Existed_Raid", 00:12:21.062 "uuid": "ce2ab9f6-f5a0-40c5-8a5a-cde65eb78d67", 00:12:21.062 "strip_size_kb": 0, 00:12:21.062 "state": "configuring", 00:12:21.062 "raid_level": "raid1", 00:12:21.062 "superblock": true, 00:12:21.062 "num_base_bdevs": 2, 00:12:21.062 "num_base_bdevs_discovered": 1, 00:12:21.062 "num_base_bdevs_operational": 2, 00:12:21.062 "base_bdevs_list": [ 00:12:21.062 { 00:12:21.062 "name": "BaseBdev1", 00:12:21.062 "uuid": "22f6bce0-ebe0-4562-bbe9-f1573c74611f", 00:12:21.062 "is_configured": true, 00:12:21.062 "data_offset": 2048, 00:12:21.062 "data_size": 63488 00:12:21.062 }, 00:12:21.062 { 00:12:21.062 "name": "BaseBdev2", 00:12:21.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.062 "is_configured": false, 00:12:21.062 "data_offset": 0, 00:12:21.062 "data_size": 0 00:12:21.062 } 00:12:21.062 ] 00:12:21.062 }' 00:12:21.062 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.062 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.630 [2024-10-30 10:40:42.862868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:21.630 [2024-10-30 10:40:42.862931] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.630 [2024-10-30 10:40:42.870916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:21.630 [2024-10-30 10:40:42.873386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.630 [2024-10-30 10:40:42.873436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.630 "name": "Existed_Raid", 00:12:21.630 "uuid": "e9156a4b-8d44-46e1-b527-dfec18ca30b7", 00:12:21.630 "strip_size_kb": 0, 00:12:21.630 "state": "configuring", 00:12:21.630 "raid_level": "raid1", 00:12:21.630 "superblock": true, 00:12:21.630 "num_base_bdevs": 2, 00:12:21.630 "num_base_bdevs_discovered": 1, 00:12:21.630 "num_base_bdevs_operational": 2, 00:12:21.630 "base_bdevs_list": [ 00:12:21.630 { 00:12:21.630 "name": "BaseBdev1", 00:12:21.630 "uuid": "22f6bce0-ebe0-4562-bbe9-f1573c74611f", 00:12:21.630 "is_configured": true, 00:12:21.630 "data_offset": 2048, 
00:12:21.630 "data_size": 63488 00:12:21.630 }, 00:12:21.630 { 00:12:21.630 "name": "BaseBdev2", 00:12:21.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.630 "is_configured": false, 00:12:21.630 "data_offset": 0, 00:12:21.630 "data_size": 0 00:12:21.630 } 00:12:21.630 ] 00:12:21.630 }' 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.630 10:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.197 [2024-10-30 10:40:43.402572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.197 [2024-10-30 10:40:43.402934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.197 [2024-10-30 10:40:43.402954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.197 [2024-10-30 10:40:43.403416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:22.197 BaseBdev2 00:12:22.197 [2024-10-30 10:40:43.403637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.197 [2024-10-30 10:40:43.403664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:22.197 [2024-10-30 10:40:43.403833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:22.197 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.198 [ 00:12:22.198 { 00:12:22.198 "name": "BaseBdev2", 00:12:22.198 "aliases": [ 00:12:22.198 "afe7d360-3fa7-4712-a05a-1cf81e2e960e" 00:12:22.198 ], 00:12:22.198 "product_name": "Malloc disk", 00:12:22.198 "block_size": 512, 00:12:22.198 "num_blocks": 65536, 00:12:22.198 "uuid": "afe7d360-3fa7-4712-a05a-1cf81e2e960e", 00:12:22.198 "assigned_rate_limits": { 00:12:22.198 "rw_ios_per_sec": 0, 00:12:22.198 "rw_mbytes_per_sec": 0, 00:12:22.198 "r_mbytes_per_sec": 0, 00:12:22.198 "w_mbytes_per_sec": 0 00:12:22.198 }, 00:12:22.198 "claimed": true, 00:12:22.198 "claim_type": 
"exclusive_write", 00:12:22.198 "zoned": false, 00:12:22.198 "supported_io_types": { 00:12:22.198 "read": true, 00:12:22.198 "write": true, 00:12:22.198 "unmap": true, 00:12:22.198 "flush": true, 00:12:22.198 "reset": true, 00:12:22.198 "nvme_admin": false, 00:12:22.198 "nvme_io": false, 00:12:22.198 "nvme_io_md": false, 00:12:22.198 "write_zeroes": true, 00:12:22.198 "zcopy": true, 00:12:22.198 "get_zone_info": false, 00:12:22.198 "zone_management": false, 00:12:22.198 "zone_append": false, 00:12:22.198 "compare": false, 00:12:22.198 "compare_and_write": false, 00:12:22.198 "abort": true, 00:12:22.198 "seek_hole": false, 00:12:22.198 "seek_data": false, 00:12:22.198 "copy": true, 00:12:22.198 "nvme_iov_md": false 00:12:22.198 }, 00:12:22.198 "memory_domains": [ 00:12:22.198 { 00:12:22.198 "dma_device_id": "system", 00:12:22.198 "dma_device_type": 1 00:12:22.198 }, 00:12:22.198 { 00:12:22.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.198 "dma_device_type": 2 00:12:22.198 } 00:12:22.198 ], 00:12:22.198 "driver_specific": {} 00:12:22.198 } 00:12:22.198 ] 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.198 "name": "Existed_Raid", 00:12:22.198 "uuid": "e9156a4b-8d44-46e1-b527-dfec18ca30b7", 00:12:22.198 "strip_size_kb": 0, 00:12:22.198 "state": "online", 00:12:22.198 "raid_level": "raid1", 00:12:22.198 "superblock": true, 00:12:22.198 "num_base_bdevs": 2, 00:12:22.198 "num_base_bdevs_discovered": 2, 00:12:22.198 "num_base_bdevs_operational": 2, 00:12:22.198 "base_bdevs_list": [ 00:12:22.198 { 00:12:22.198 "name": "BaseBdev1", 00:12:22.198 "uuid": "22f6bce0-ebe0-4562-bbe9-f1573c74611f", 00:12:22.198 "is_configured": true, 00:12:22.198 "data_offset": 2048, 00:12:22.198 "data_size": 63488 
00:12:22.198 }, 00:12:22.198 { 00:12:22.198 "name": "BaseBdev2", 00:12:22.198 "uuid": "afe7d360-3fa7-4712-a05a-1cf81e2e960e", 00:12:22.198 "is_configured": true, 00:12:22.198 "data_offset": 2048, 00:12:22.198 "data_size": 63488 00:12:22.198 } 00:12:22.198 ] 00:12:22.198 }' 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.198 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.767 [2024-10-30 10:40:43.955156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.767 "name": 
"Existed_Raid", 00:12:22.767 "aliases": [ 00:12:22.767 "e9156a4b-8d44-46e1-b527-dfec18ca30b7" 00:12:22.767 ], 00:12:22.767 "product_name": "Raid Volume", 00:12:22.767 "block_size": 512, 00:12:22.767 "num_blocks": 63488, 00:12:22.767 "uuid": "e9156a4b-8d44-46e1-b527-dfec18ca30b7", 00:12:22.767 "assigned_rate_limits": { 00:12:22.767 "rw_ios_per_sec": 0, 00:12:22.767 "rw_mbytes_per_sec": 0, 00:12:22.767 "r_mbytes_per_sec": 0, 00:12:22.767 "w_mbytes_per_sec": 0 00:12:22.767 }, 00:12:22.767 "claimed": false, 00:12:22.767 "zoned": false, 00:12:22.767 "supported_io_types": { 00:12:22.767 "read": true, 00:12:22.767 "write": true, 00:12:22.767 "unmap": false, 00:12:22.767 "flush": false, 00:12:22.767 "reset": true, 00:12:22.767 "nvme_admin": false, 00:12:22.767 "nvme_io": false, 00:12:22.767 "nvme_io_md": false, 00:12:22.767 "write_zeroes": true, 00:12:22.767 "zcopy": false, 00:12:22.767 "get_zone_info": false, 00:12:22.767 "zone_management": false, 00:12:22.767 "zone_append": false, 00:12:22.767 "compare": false, 00:12:22.767 "compare_and_write": false, 00:12:22.767 "abort": false, 00:12:22.767 "seek_hole": false, 00:12:22.767 "seek_data": false, 00:12:22.767 "copy": false, 00:12:22.767 "nvme_iov_md": false 00:12:22.767 }, 00:12:22.767 "memory_domains": [ 00:12:22.767 { 00:12:22.767 "dma_device_id": "system", 00:12:22.767 "dma_device_type": 1 00:12:22.767 }, 00:12:22.767 { 00:12:22.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.767 "dma_device_type": 2 00:12:22.767 }, 00:12:22.767 { 00:12:22.767 "dma_device_id": "system", 00:12:22.767 "dma_device_type": 1 00:12:22.767 }, 00:12:22.767 { 00:12:22.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.767 "dma_device_type": 2 00:12:22.767 } 00:12:22.767 ], 00:12:22.767 "driver_specific": { 00:12:22.767 "raid": { 00:12:22.767 "uuid": "e9156a4b-8d44-46e1-b527-dfec18ca30b7", 00:12:22.767 "strip_size_kb": 0, 00:12:22.767 "state": "online", 00:12:22.767 "raid_level": "raid1", 00:12:22.767 "superblock": true, 00:12:22.767 
"num_base_bdevs": 2, 00:12:22.767 "num_base_bdevs_discovered": 2, 00:12:22.767 "num_base_bdevs_operational": 2, 00:12:22.767 "base_bdevs_list": [ 00:12:22.767 { 00:12:22.767 "name": "BaseBdev1", 00:12:22.767 "uuid": "22f6bce0-ebe0-4562-bbe9-f1573c74611f", 00:12:22.767 "is_configured": true, 00:12:22.767 "data_offset": 2048, 00:12:22.767 "data_size": 63488 00:12:22.767 }, 00:12:22.767 { 00:12:22.767 "name": "BaseBdev2", 00:12:22.767 "uuid": "afe7d360-3fa7-4712-a05a-1cf81e2e960e", 00:12:22.767 "is_configured": true, 00:12:22.767 "data_offset": 2048, 00:12:22.767 "data_size": 63488 00:12:22.767 } 00:12:22.767 ] 00:12:22.767 } 00:12:22.767 } 00:12:22.767 }' 00:12:22.767 10:40:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:22.767 BaseBdev2' 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.767 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.767 [2024-10-30 10:40:44.226918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:23.026 10:40:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.026 10:40:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.026 "name": "Existed_Raid", 00:12:23.026 "uuid": "e9156a4b-8d44-46e1-b527-dfec18ca30b7", 00:12:23.026 "strip_size_kb": 0, 00:12:23.026 "state": "online", 00:12:23.026 "raid_level": "raid1", 00:12:23.026 "superblock": true, 00:12:23.026 "num_base_bdevs": 2, 00:12:23.026 "num_base_bdevs_discovered": 1, 00:12:23.026 "num_base_bdevs_operational": 1, 00:12:23.026 "base_bdevs_list": [ 00:12:23.026 { 00:12:23.026 "name": null, 00:12:23.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.026 "is_configured": false, 00:12:23.026 "data_offset": 0, 00:12:23.026 "data_size": 63488 00:12:23.026 }, 00:12:23.026 { 00:12:23.026 "name": "BaseBdev2", 00:12:23.026 "uuid": "afe7d360-3fa7-4712-a05a-1cf81e2e960e", 00:12:23.026 "is_configured": true, 00:12:23.026 "data_offset": 2048, 00:12:23.026 "data_size": 63488 00:12:23.026 } 00:12:23.026 ] 00:12:23.026 }' 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.026 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.594 10:40:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.594 [2024-10-30 10:40:44.886766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:23.594 [2024-10-30 10:40:44.886906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.594 [2024-10-30 10:40:44.970336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.594 [2024-10-30 10:40:44.970412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.594 [2024-10-30 10:40:44.970432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.594 10:40:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63104 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63104 ']' 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63104 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63104 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:23.594 killing process with pid 63104 00:12:23.594 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63104' 00:12:23.595 10:40:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63104 00:12:23.595 [2024-10-30 10:40:45.060081] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.595 10:40:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@976 -- # wait 63104 00:12:23.853 [2024-10-30 10:40:45.075041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.789 10:40:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:24.789 00:12:24.789 real 0m5.413s 00:12:24.789 user 0m8.152s 00:12:24.789 sys 0m0.815s 00:12:24.789 10:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:24.789 ************************************ 00:12:24.789 END TEST raid_state_function_test_sb 00:12:24.789 ************************************ 00:12:24.789 10:40:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.789 10:40:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:12:24.789 10:40:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:24.789 10:40:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:24.789 10:40:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.789 ************************************ 00:12:24.789 START TEST raid_superblock_test 00:12:24.789 ************************************ 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:24.789 10:40:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63362 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63362 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63362 ']' 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:24.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:24.789 10:40:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.789 [2024-10-30 10:40:46.255157] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:12:24.789 [2024-10-30 10:40:46.255371] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63362 ] 00:12:25.048 [2024-10-30 10:40:46.437373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.306 [2024-10-30 10:40:46.564079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.306 [2024-10-30 10:40:46.767171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.306 [2024-10-30 10:40:46.767259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.874 10:40:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.874 malloc1 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.874 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.874 [2024-10-30 10:40:47.299439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:25.874 [2024-10-30 10:40:47.299569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.874 [2024-10-30 10:40:47.299610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:25.874 [2024-10-30 10:40:47.299631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.874 [2024-10-30 10:40:47.302807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.874 [2024-10-30 10:40:47.302855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:25.874 pt1 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.875 10:40:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 malloc2 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 [2024-10-30 10:40:47.355435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.135 [2024-10-30 10:40:47.355514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.135 [2024-10-30 10:40:47.355558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:26.135 
[2024-10-30 10:40:47.355582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.135 [2024-10-30 10:40:47.358671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.135 [2024-10-30 10:40:47.358721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.135 pt2 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 [2024-10-30 10:40:47.367671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.135 [2024-10-30 10:40:47.370130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.135 [2024-10-30 10:40:47.370402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:26.135 [2024-10-30 10:40:47.370439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.135 [2024-10-30 10:40:47.370814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:26.135 [2024-10-30 10:40:47.371099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:26.135 [2024-10-30 10:40:47.371145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:26.135 [2024-10-30 10:40:47.371418] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.135 "name": "raid_bdev1", 00:12:26.135 "uuid": 
"c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:26.135 "strip_size_kb": 0, 00:12:26.135 "state": "online", 00:12:26.135 "raid_level": "raid1", 00:12:26.135 "superblock": true, 00:12:26.135 "num_base_bdevs": 2, 00:12:26.135 "num_base_bdevs_discovered": 2, 00:12:26.135 "num_base_bdevs_operational": 2, 00:12:26.135 "base_bdevs_list": [ 00:12:26.135 { 00:12:26.135 "name": "pt1", 00:12:26.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.135 "is_configured": true, 00:12:26.135 "data_offset": 2048, 00:12:26.135 "data_size": 63488 00:12:26.135 }, 00:12:26.135 { 00:12:26.135 "name": "pt2", 00:12:26.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.135 "is_configured": true, 00:12:26.135 "data_offset": 2048, 00:12:26.135 "data_size": 63488 00:12:26.135 } 00:12:26.135 ] 00:12:26.135 }' 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.135 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.702 
10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.702 [2024-10-30 10:40:47.884147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.702 "name": "raid_bdev1", 00:12:26.702 "aliases": [ 00:12:26.702 "c97aaa32-7576-45b1-8a6b-21054bedcdd8" 00:12:26.702 ], 00:12:26.702 "product_name": "Raid Volume", 00:12:26.702 "block_size": 512, 00:12:26.702 "num_blocks": 63488, 00:12:26.702 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:26.702 "assigned_rate_limits": { 00:12:26.702 "rw_ios_per_sec": 0, 00:12:26.702 "rw_mbytes_per_sec": 0, 00:12:26.702 "r_mbytes_per_sec": 0, 00:12:26.702 "w_mbytes_per_sec": 0 00:12:26.702 }, 00:12:26.702 "claimed": false, 00:12:26.702 "zoned": false, 00:12:26.702 "supported_io_types": { 00:12:26.702 "read": true, 00:12:26.702 "write": true, 00:12:26.702 "unmap": false, 00:12:26.702 "flush": false, 00:12:26.702 "reset": true, 00:12:26.702 "nvme_admin": false, 00:12:26.702 "nvme_io": false, 00:12:26.702 "nvme_io_md": false, 00:12:26.702 "write_zeroes": true, 00:12:26.702 "zcopy": false, 00:12:26.702 "get_zone_info": false, 00:12:26.702 "zone_management": false, 00:12:26.702 "zone_append": false, 00:12:26.702 "compare": false, 00:12:26.702 "compare_and_write": false, 00:12:26.702 "abort": false, 00:12:26.702 "seek_hole": false, 00:12:26.702 "seek_data": false, 00:12:26.702 "copy": false, 00:12:26.702 "nvme_iov_md": false 00:12:26.702 }, 00:12:26.702 "memory_domains": [ 00:12:26.702 { 00:12:26.702 "dma_device_id": "system", 00:12:26.702 "dma_device_type": 1 00:12:26.702 }, 00:12:26.702 { 00:12:26.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.702 "dma_device_type": 2 00:12:26.702 }, 00:12:26.702 { 00:12:26.702 "dma_device_id": "system", 00:12:26.702 
"dma_device_type": 1 00:12:26.702 }, 00:12:26.702 { 00:12:26.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.702 "dma_device_type": 2 00:12:26.702 } 00:12:26.702 ], 00:12:26.702 "driver_specific": { 00:12:26.702 "raid": { 00:12:26.702 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:26.702 "strip_size_kb": 0, 00:12:26.702 "state": "online", 00:12:26.702 "raid_level": "raid1", 00:12:26.702 "superblock": true, 00:12:26.702 "num_base_bdevs": 2, 00:12:26.702 "num_base_bdevs_discovered": 2, 00:12:26.702 "num_base_bdevs_operational": 2, 00:12:26.702 "base_bdevs_list": [ 00:12:26.702 { 00:12:26.702 "name": "pt1", 00:12:26.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.702 "is_configured": true, 00:12:26.702 "data_offset": 2048, 00:12:26.702 "data_size": 63488 00:12:26.702 }, 00:12:26.702 { 00:12:26.702 "name": "pt2", 00:12:26.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.702 "is_configured": true, 00:12:26.702 "data_offset": 2048, 00:12:26.702 "data_size": 63488 00:12:26.702 } 00:12:26.702 ] 00:12:26.702 } 00:12:26.702 } 00:12:26.702 }' 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:26.702 pt2' 00:12:26.702 10:40:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.702 10:40:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.702 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:26.702 [2024-10-30 10:40:48.164204] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c97aaa32-7576-45b1-8a6b-21054bedcdd8 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c97aaa32-7576-45b1-8a6b-21054bedcdd8 ']' 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.961 [2024-10-30 10:40:48.223847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.961 [2024-10-30 10:40:48.223888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.961 [2024-10-30 10:40:48.224061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.961 [2024-10-30 10:40:48.224189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.961 [2024-10-30 10:40:48.224233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:26.961 10:40:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.962 [2024-10-30 10:40:48.363922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:26.962 [2024-10-30 10:40:48.366439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:26.962 [2024-10-30 10:40:48.366564] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:26.962 [2024-10-30 10:40:48.366684] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:26.962 [2024-10-30 10:40:48.366735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.962 [2024-10-30 10:40:48.366764] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:26.962 request: 00:12:26.962 { 00:12:26.962 "name": "raid_bdev1", 00:12:26.962 "raid_level": "raid1", 00:12:26.962 "base_bdevs": [ 00:12:26.962 "malloc1", 00:12:26.962 "malloc2" 00:12:26.962 ], 00:12:26.962 "superblock": false, 00:12:26.962 "method": "bdev_raid_create", 00:12:26.962 "req_id": 1 00:12:26.962 } 00:12:26.962 Got JSON-RPC error response 00:12:26.962 response: 00:12:26.962 { 00:12:26.962 "code": -17, 00:12:26.962 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:26.962 } 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.962 
10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.962 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.220 [2024-10-30 10:40:48.431926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:27.220 [2024-10-30 10:40:48.432041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.220 [2024-10-30 10:40:48.432088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:27.220 [2024-10-30 10:40:48.432122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.220 [2024-10-30 10:40:48.435083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.220 [2024-10-30 10:40:48.435139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:27.220 [2024-10-30 10:40:48.435318] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:27.220 [2024-10-30 10:40:48.435443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:27.220 pt1 00:12:27.220 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.220 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:27.220 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.220 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.220 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.220 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:27.221 10:40:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.221 "name": "raid_bdev1", 00:12:27.221 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:27.221 "strip_size_kb": 0, 00:12:27.221 "state": "configuring", 00:12:27.221 "raid_level": "raid1", 00:12:27.221 "superblock": true, 00:12:27.221 "num_base_bdevs": 2, 00:12:27.221 "num_base_bdevs_discovered": 1, 00:12:27.221 "num_base_bdevs_operational": 2, 00:12:27.221 "base_bdevs_list": [ 00:12:27.221 { 00:12:27.221 "name": "pt1", 00:12:27.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.221 "is_configured": true, 00:12:27.221 "data_offset": 2048, 00:12:27.221 "data_size": 63488 00:12:27.221 }, 00:12:27.221 { 00:12:27.221 "name": null, 00:12:27.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.221 "is_configured": false, 00:12:27.221 "data_offset": 2048, 00:12:27.221 "data_size": 63488 00:12:27.221 } 00:12:27.221 ] 00:12:27.221 }' 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:27.221 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.789 [2024-10-30 10:40:48.964069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.789 [2024-10-30 10:40:48.964157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.789 [2024-10-30 10:40:48.964187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:27.789 [2024-10-30 10:40:48.964205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.789 [2024-10-30 10:40:48.964780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.789 [2024-10-30 10:40:48.964826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.789 [2024-10-30 10:40:48.964926] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.789 [2024-10-30 10:40:48.964964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.789 [2024-10-30 10:40:48.965130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.789 [2024-10-30 10:40:48.965161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:12:27.789 [2024-10-30 10:40:48.965461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:27.789 [2024-10-30 10:40:48.965673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:27.789 [2024-10-30 10:40:48.965698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:27.789 [2024-10-30 10:40:48.965866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.789 pt2 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.789 10:40:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.789 10:40:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.789 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.789 "name": "raid_bdev1", 00:12:27.789 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:27.789 "strip_size_kb": 0, 00:12:27.789 "state": "online", 00:12:27.789 "raid_level": "raid1", 00:12:27.789 "superblock": true, 00:12:27.789 "num_base_bdevs": 2, 00:12:27.789 "num_base_bdevs_discovered": 2, 00:12:27.789 "num_base_bdevs_operational": 2, 00:12:27.789 "base_bdevs_list": [ 00:12:27.789 { 00:12:27.789 "name": "pt1", 00:12:27.789 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.789 "is_configured": true, 00:12:27.789 "data_offset": 2048, 00:12:27.789 "data_size": 63488 00:12:27.789 }, 00:12:27.789 { 00:12:27.789 "name": "pt2", 00:12:27.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.789 "is_configured": true, 00:12:27.789 "data_offset": 2048, 00:12:27.789 "data_size": 63488 00:12:27.789 } 00:12:27.789 ] 00:12:27.789 }' 00:12:27.789 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.789 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.048 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.048 [2024-10-30 10:40:49.460484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.049 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.049 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.049 "name": "raid_bdev1", 00:12:28.049 "aliases": [ 00:12:28.049 "c97aaa32-7576-45b1-8a6b-21054bedcdd8" 00:12:28.049 ], 00:12:28.049 "product_name": "Raid Volume", 00:12:28.049 "block_size": 512, 00:12:28.049 "num_blocks": 63488, 00:12:28.049 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:28.049 "assigned_rate_limits": { 00:12:28.049 "rw_ios_per_sec": 0, 00:12:28.049 "rw_mbytes_per_sec": 0, 00:12:28.049 "r_mbytes_per_sec": 0, 00:12:28.049 "w_mbytes_per_sec": 0 00:12:28.049 }, 00:12:28.049 "claimed": false, 00:12:28.049 "zoned": false, 00:12:28.049 "supported_io_types": { 00:12:28.049 "read": true, 00:12:28.049 "write": true, 00:12:28.049 "unmap": false, 00:12:28.049 "flush": false, 00:12:28.049 "reset": true, 00:12:28.049 "nvme_admin": false, 00:12:28.049 "nvme_io": false, 00:12:28.049 "nvme_io_md": false, 00:12:28.049 "write_zeroes": true, 00:12:28.049 "zcopy": 
false, 00:12:28.049 "get_zone_info": false, 00:12:28.049 "zone_management": false, 00:12:28.049 "zone_append": false, 00:12:28.049 "compare": false, 00:12:28.049 "compare_and_write": false, 00:12:28.049 "abort": false, 00:12:28.049 "seek_hole": false, 00:12:28.049 "seek_data": false, 00:12:28.049 "copy": false, 00:12:28.049 "nvme_iov_md": false 00:12:28.049 }, 00:12:28.049 "memory_domains": [ 00:12:28.049 { 00:12:28.049 "dma_device_id": "system", 00:12:28.049 "dma_device_type": 1 00:12:28.049 }, 00:12:28.049 { 00:12:28.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.049 "dma_device_type": 2 00:12:28.049 }, 00:12:28.049 { 00:12:28.049 "dma_device_id": "system", 00:12:28.049 "dma_device_type": 1 00:12:28.049 }, 00:12:28.049 { 00:12:28.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.049 "dma_device_type": 2 00:12:28.049 } 00:12:28.049 ], 00:12:28.049 "driver_specific": { 00:12:28.049 "raid": { 00:12:28.049 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:28.049 "strip_size_kb": 0, 00:12:28.049 "state": "online", 00:12:28.049 "raid_level": "raid1", 00:12:28.049 "superblock": true, 00:12:28.049 "num_base_bdevs": 2, 00:12:28.049 "num_base_bdevs_discovered": 2, 00:12:28.049 "num_base_bdevs_operational": 2, 00:12:28.049 "base_bdevs_list": [ 00:12:28.049 { 00:12:28.049 "name": "pt1", 00:12:28.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.049 "is_configured": true, 00:12:28.049 "data_offset": 2048, 00:12:28.049 "data_size": 63488 00:12:28.049 }, 00:12:28.049 { 00:12:28.049 "name": "pt2", 00:12:28.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.049 "is_configured": true, 00:12:28.049 "data_offset": 2048, 00:12:28.049 "data_size": 63488 00:12:28.049 } 00:12:28.049 ] 00:12:28.049 } 00:12:28.049 } 00:12:28.049 }' 00:12:28.049 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:28.318 pt2' 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.318 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.319 [2024-10-30 10:40:49.728545] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c97aaa32-7576-45b1-8a6b-21054bedcdd8 '!=' c97aaa32-7576-45b1-8a6b-21054bedcdd8 ']' 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.319 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.580 [2024-10-30 10:40:49.788280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.580 "name": "raid_bdev1", 00:12:28.580 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:28.580 "strip_size_kb": 0, 00:12:28.580 "state": "online", 00:12:28.580 "raid_level": "raid1", 00:12:28.580 "superblock": true, 00:12:28.580 "num_base_bdevs": 2, 00:12:28.580 "num_base_bdevs_discovered": 1, 00:12:28.580 "num_base_bdevs_operational": 1, 00:12:28.580 "base_bdevs_list": [ 00:12:28.580 { 00:12:28.580 "name": null, 
00:12:28.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.580 "is_configured": false, 00:12:28.580 "data_offset": 0, 00:12:28.580 "data_size": 63488 00:12:28.580 }, 00:12:28.580 { 00:12:28.580 "name": "pt2", 00:12:28.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.580 "is_configured": true, 00:12:28.580 "data_offset": 2048, 00:12:28.580 "data_size": 63488 00:12:28.580 } 00:12:28.580 ] 00:12:28.580 }' 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.580 10:40:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.838 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:28.838 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.838 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.838 [2024-10-30 10:40:50.304400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.838 [2024-10-30 10:40:50.304433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.838 [2024-10-30 10:40:50.304524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.838 [2024-10-30 10:40:50.304591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.838 [2024-10-30 10:40:50.304610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.097 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.097 [2024-10-30 10:40:50.380396] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:29.097 [2024-10-30 10:40:50.380478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.097 [2024-10-30 10:40:50.380506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:29.097 [2024-10-30 10:40:50.380523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.097 [2024-10-30 10:40:50.383387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.098 [2024-10-30 10:40:50.383439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:29.098 [2024-10-30 10:40:50.383536] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:29.098 [2024-10-30 10:40:50.383596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.098 [2024-10-30 10:40:50.383719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:29.098 [2024-10-30 10:40:50.383742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.098 [2024-10-30 10:40:50.384044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:29.098 [2024-10-30 10:40:50.384237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:29.098 [2024-10-30 10:40:50.384253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:29.098 [2024-10-30 10:40:50.384464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.098 pt2 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.098 "name": "raid_bdev1", 00:12:29.098 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:29.098 "strip_size_kb": 0, 00:12:29.098 "state": "online", 00:12:29.098 "raid_level": "raid1", 00:12:29.098 "superblock": true, 00:12:29.098 "num_base_bdevs": 2, 00:12:29.098 "num_base_bdevs_discovered": 1, 00:12:29.098 "num_base_bdevs_operational": 1, 00:12:29.098 "base_bdevs_list": [ 00:12:29.098 { 00:12:29.098 "name": null, 
00:12:29.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.098 "is_configured": false, 00:12:29.098 "data_offset": 2048, 00:12:29.098 "data_size": 63488 00:12:29.098 }, 00:12:29.098 { 00:12:29.098 "name": "pt2", 00:12:29.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.098 "is_configured": true, 00:12:29.098 "data_offset": 2048, 00:12:29.098 "data_size": 63488 00:12:29.098 } 00:12:29.098 ] 00:12:29.098 }' 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.098 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.665 [2024-10-30 10:40:50.896522] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.665 [2024-10-30 10:40:50.896558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.665 [2024-10-30 10:40:50.896644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.665 [2024-10-30 10:40:50.896710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.665 [2024-10-30 10:40:50.896726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.665 10:40:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.665 [2024-10-30 10:40:50.960575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:29.665 [2024-10-30 10:40:50.960654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.665 [2024-10-30 10:40:50.960687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:29.665 [2024-10-30 10:40:50.960703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.665 [2024-10-30 10:40:50.963647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.665 [2024-10-30 10:40:50.963825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:29.665 [2024-10-30 10:40:50.963951] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:29.665 [2024-10-30 10:40:50.964038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:29.665 [2024-10-30 10:40:50.964211] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:29.665 [2024-10-30 10:40:50.964229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.665 [2024-10-30 10:40:50.964252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:29.665 [2024-10-30 10:40:50.964321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:29.665 [2024-10-30 10:40:50.964424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:29.665 [2024-10-30 10:40:50.964439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.665 [2024-10-30 10:40:50.964757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:29.665 [2024-10-30 10:40:50.964935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:29.665 [2024-10-30 10:40:50.964956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:29.665 [2024-10-30 10:40:50.965214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.665 pt1 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.665 10:40:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.665 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.665 "name": "raid_bdev1", 00:12:29.665 "uuid": "c97aaa32-7576-45b1-8a6b-21054bedcdd8", 00:12:29.665 "strip_size_kb": 0, 00:12:29.665 "state": "online", 00:12:29.665 "raid_level": "raid1", 00:12:29.665 "superblock": true, 00:12:29.665 "num_base_bdevs": 2, 00:12:29.665 "num_base_bdevs_discovered": 1, 00:12:29.665 "num_base_bdevs_operational": 1, 00:12:29.665 "base_bdevs_list": [ 00:12:29.665 { 00:12:29.665 "name": null, 00:12:29.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.665 "is_configured": false, 00:12:29.665 "data_offset": 2048, 00:12:29.665 "data_size": 63488 00:12:29.665 }, 00:12:29.665 { 00:12:29.665 "name": "pt2", 00:12:29.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.665 "is_configured": true, 00:12:29.665 
"data_offset": 2048, 00:12:29.665 "data_size": 63488 00:12:29.665 } 00:12:29.665 ] 00:12:29.665 }' 00:12:29.665 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.665 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.233 [2024-10-30 10:40:51.521544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c97aaa32-7576-45b1-8a6b-21054bedcdd8 '!=' c97aaa32-7576-45b1-8a6b-21054bedcdd8 ']' 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63362 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63362 ']' 00:12:30.233 
10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63362 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63362 00:12:30.233 killing process with pid 63362 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63362' 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63362 00:12:30.233 [2024-10-30 10:40:51.593693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.233 10:40:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63362 00:12:30.233 [2024-10-30 10:40:51.593812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.233 [2024-10-30 10:40:51.593889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.233 [2024-10-30 10:40:51.593912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:30.492 [2024-10-30 10:40:51.781521] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.436 10:40:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:31.436 00:12:31.436 real 0m6.657s 00:12:31.436 user 0m10.621s 00:12:31.436 sys 0m0.891s 00:12:31.436 10:40:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:31.436 10:40:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.436 ************************************ 00:12:31.436 END TEST raid_superblock_test 00:12:31.436 ************************************ 00:12:31.436 10:40:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:12:31.436 10:40:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:31.436 10:40:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:31.436 10:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.436 ************************************ 00:12:31.436 START TEST raid_read_error_test 00:12:31.436 ************************************ 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.436 
10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BhXedFPNmn 00:12:31.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63697 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63697 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63697 ']' 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:31.436 10:40:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.695 [2024-10-30 10:40:52.986738] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:12:31.695 [2024-10-30 10:40:52.987017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63697 ] 00:12:31.953 [2024-10-30 10:40:53.179142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.953 [2024-10-30 10:40:53.305589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.211 [2024-10-30 10:40:53.506510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.211 [2024-10-30 10:40:53.506582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.778 10:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:32.778 10:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:32.778 10:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.778 10:40:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.778 10:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.778 10:40:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 BaseBdev1_malloc 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 true 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 [2024-10-30 10:40:54.037642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:32.778 [2024-10-30 10:40:54.037847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.778 [2024-10-30 10:40:54.037888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:32.778 [2024-10-30 10:40:54.037909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.778 [2024-10-30 10:40:54.040768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.778 [2024-10-30 10:40:54.040820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.778 BaseBdev1 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 BaseBdev2_malloc 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 true 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 [2024-10-30 10:40:54.093582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:32.778 [2024-10-30 10:40:54.093652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.778 [2024-10-30 10:40:54.093678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:32.778 [2024-10-30 10:40:54.093696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.778 [2024-10-30 10:40:54.096488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.778 [2024-10-30 10:40:54.096672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.778 BaseBdev2 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 [2024-10-30 10:40:54.101657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.778 
[2024-10-30 10:40:54.104218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.778 [2024-10-30 10:40:54.104582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:32.778 [2024-10-30 10:40:54.104715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.778 [2024-10-30 10:40:54.105131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:32.778 [2024-10-30 10:40:54.105486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:32.778 [2024-10-30 10:40:54.105605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:32.778 [2024-10-30 10:40:54.105987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.778 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.778 "name": "raid_bdev1", 00:12:32.778 "uuid": "978a3428-d04c-41f2-a833-9cfc73eb44e2", 00:12:32.778 "strip_size_kb": 0, 00:12:32.778 "state": "online", 00:12:32.778 "raid_level": "raid1", 00:12:32.778 "superblock": true, 00:12:32.778 "num_base_bdevs": 2, 00:12:32.779 "num_base_bdevs_discovered": 2, 00:12:32.779 "num_base_bdevs_operational": 2, 00:12:32.779 "base_bdevs_list": [ 00:12:32.779 { 00:12:32.779 "name": "BaseBdev1", 00:12:32.779 "uuid": "20c2ca5b-8fd7-5397-a1fb-50c1e1ce4394", 00:12:32.779 "is_configured": true, 00:12:32.779 "data_offset": 2048, 00:12:32.779 "data_size": 63488 00:12:32.779 }, 00:12:32.779 { 00:12:32.779 "name": "BaseBdev2", 00:12:32.779 "uuid": "2aded9a7-5a2c-5505-ac33-0fe74151106d", 00:12:32.779 "is_configured": true, 00:12:32.779 "data_offset": 2048, 00:12:32.779 "data_size": 63488 00:12:32.779 } 00:12:32.779 ] 00:12:32.779 }' 00:12:32.779 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.779 10:40:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.344 10:40:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:33.344 10:40:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:33.344 [2024-10-30 10:40:54.735460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.278 "name": "raid_bdev1", 00:12:34.278 "uuid": "978a3428-d04c-41f2-a833-9cfc73eb44e2", 00:12:34.278 "strip_size_kb": 0, 00:12:34.278 "state": "online", 00:12:34.278 "raid_level": "raid1", 00:12:34.278 "superblock": true, 00:12:34.278 "num_base_bdevs": 2, 00:12:34.278 "num_base_bdevs_discovered": 2, 00:12:34.278 "num_base_bdevs_operational": 2, 00:12:34.278 "base_bdevs_list": [ 00:12:34.278 { 00:12:34.278 "name": "BaseBdev1", 00:12:34.278 "uuid": "20c2ca5b-8fd7-5397-a1fb-50c1e1ce4394", 00:12:34.278 "is_configured": true, 00:12:34.278 "data_offset": 2048, 00:12:34.278 "data_size": 63488 00:12:34.278 }, 00:12:34.278 { 00:12:34.278 "name": "BaseBdev2", 00:12:34.278 "uuid": "2aded9a7-5a2c-5505-ac33-0fe74151106d", 00:12:34.278 "is_configured": true, 00:12:34.278 "data_offset": 2048, 00:12:34.278 "data_size": 63488 00:12:34.278 } 00:12:34.278 ] 00:12:34.278 }' 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.278 10:40:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.844 10:40:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.844 [2024-10-30 10:40:56.193151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.844 [2024-10-30 10:40:56.193322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.844 [2024-10-30 10:40:56.196751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.844 { 00:12:34.844 "results": [ 00:12:34.844 { 00:12:34.844 "job": "raid_bdev1", 00:12:34.844 "core_mask": "0x1", 00:12:34.844 "workload": "randrw", 00:12:34.844 "percentage": 50, 00:12:34.844 "status": "finished", 00:12:34.844 "queue_depth": 1, 00:12:34.844 "io_size": 131072, 00:12:34.844 "runtime": 1.455357, 00:12:34.844 "iops": 12848.39389922885, 00:12:34.844 "mibps": 1606.0492374036062, 00:12:34.844 "io_failed": 0, 00:12:34.844 "io_timeout": 0, 00:12:34.844 "avg_latency_us": 73.58847813932684, 00:12:34.844 "min_latency_us": 42.589090909090906, 00:12:34.844 "max_latency_us": 2025.658181818182 00:12:34.844 } 00:12:34.844 ], 00:12:34.844 "core_count": 1 00:12:34.844 } 00:12:34.844 [2024-10-30 10:40:56.196936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.844 [2024-10-30 10:40:56.197147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.844 [2024-10-30 10:40:56.197175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63697 00:12:34.844 10:40:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63697 ']' 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63697 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63697 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63697' 00:12:34.844 killing process with pid 63697 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63697 00:12:34.844 [2024-10-30 10:40:56.226138] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.844 10:40:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63697 00:12:35.104 [2024-10-30 10:40:56.344226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BhXedFPNmn 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:36.055 ************************************ 00:12:36.055 END TEST raid_read_error_test 00:12:36.055 ************************************ 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy 
raid1 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:36.055 00:12:36.055 real 0m4.569s 00:12:36.055 user 0m5.755s 00:12:36.055 sys 0m0.563s 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:36.055 10:40:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.055 10:40:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:12:36.055 10:40:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:36.055 10:40:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:36.055 10:40:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.055 ************************************ 00:12:36.055 START TEST raid_write_error_test 00:12:36.055 ************************************ 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:36.055 10:40:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3Sv99HLto0 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63843 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63843 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:36.055 10:40:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63843 ']' 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:36.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:36.055 10:40:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.314 [2024-10-30 10:40:57.584953] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:12:36.314 [2024-10-30 10:40:57.585166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63843 ] 00:12:36.314 [2024-10-30 10:40:57.775763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.572 [2024-10-30 10:40:57.928107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.830 [2024-10-30 10:40:58.143162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.830 [2024-10-30 10:40:58.143213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.398 BaseBdev1_malloc 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.398 true 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.398 [2024-10-30 10:40:58.624248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:37.398 [2024-10-30 10:40:58.624472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.398 [2024-10-30 10:40:58.624513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:37.398 [2024-10-30 10:40:58.624533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.398 [2024-10-30 10:40:58.627369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.398 [2024-10-30 10:40:58.627422] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:37.398 BaseBdev1 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.398 BaseBdev2_malloc 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.398 true 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.398 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.398 [2024-10-30 10:40:58.680318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:37.398 [2024-10-30 10:40:58.680388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.398 [2024-10-30 10:40:58.680415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:37.398 
[2024-10-30 10:40:58.680433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.398 [2024-10-30 10:40:58.683238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.398 [2024-10-30 10:40:58.683289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:37.399 BaseBdev2 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.399 [2024-10-30 10:40:58.688391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.399 [2024-10-30 10:40:58.690839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.399 [2024-10-30 10:40:58.691122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:37.399 [2024-10-30 10:40:58.691147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.399 [2024-10-30 10:40:58.691458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:37.399 [2024-10-30 10:40:58.691701] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:37.399 [2024-10-30 10:40:58.691719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:37.399 [2024-10-30 10:40:58.691910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.399 
10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.399 "name": "raid_bdev1", 00:12:37.399 "uuid": "34de5006-6032-4298-81fd-8a880a85789c", 00:12:37.399 "strip_size_kb": 0, 00:12:37.399 "state": "online", 00:12:37.399 "raid_level": "raid1", 00:12:37.399 "superblock": true, 00:12:37.399 
"num_base_bdevs": 2, 00:12:37.399 "num_base_bdevs_discovered": 2, 00:12:37.399 "num_base_bdevs_operational": 2, 00:12:37.399 "base_bdevs_list": [ 00:12:37.399 { 00:12:37.399 "name": "BaseBdev1", 00:12:37.399 "uuid": "b2db0678-0984-54da-beb3-fcd7fe35eb40", 00:12:37.399 "is_configured": true, 00:12:37.399 "data_offset": 2048, 00:12:37.399 "data_size": 63488 00:12:37.399 }, 00:12:37.399 { 00:12:37.399 "name": "BaseBdev2", 00:12:37.399 "uuid": "911caa51-1631-5e2e-a2fd-349e1576fb8c", 00:12:37.399 "is_configured": true, 00:12:37.399 "data_offset": 2048, 00:12:37.399 "data_size": 63488 00:12:37.399 } 00:12:37.399 ] 00:12:37.399 }' 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.399 10:40:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.967 10:40:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:37.967 10:40:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:37.967 [2024-10-30 10:40:59.285973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.901 [2024-10-30 10:41:00.170994] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:38.901 [2024-10-30 10:41:00.171082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.901 [2024-10-30 10:41:00.171315] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:38.901 10:41:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.901 "name": "raid_bdev1", 00:12:38.901 "uuid": "34de5006-6032-4298-81fd-8a880a85789c", 00:12:38.901 "strip_size_kb": 0, 00:12:38.901 "state": "online", 00:12:38.901 "raid_level": "raid1", 00:12:38.901 "superblock": true, 00:12:38.901 "num_base_bdevs": 2, 00:12:38.901 "num_base_bdevs_discovered": 1, 00:12:38.901 "num_base_bdevs_operational": 1, 00:12:38.901 "base_bdevs_list": [ 00:12:38.901 { 00:12:38.901 "name": null, 00:12:38.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.901 "is_configured": false, 00:12:38.901 "data_offset": 0, 00:12:38.901 "data_size": 63488 00:12:38.901 }, 00:12:38.901 { 00:12:38.901 "name": "BaseBdev2", 00:12:38.901 "uuid": "911caa51-1631-5e2e-a2fd-349e1576fb8c", 00:12:38.901 "is_configured": true, 00:12:38.901 "data_offset": 2048, 00:12:38.901 "data_size": 63488 00:12:38.901 } 00:12:38.901 ] 00:12:38.901 }' 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.901 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.473 [2024-10-30 10:41:00.694494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.473 [2024-10-30 10:41:00.694532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.473 [2024-10-30 10:41:00.697853] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.473 [2024-10-30 10:41:00.697906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.473 [2024-10-30 10:41:00.698003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.473 [2024-10-30 10:41:00.698024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.473 { 00:12:39.473 "results": [ 00:12:39.473 { 00:12:39.473 "job": "raid_bdev1", 00:12:39.473 "core_mask": "0x1", 00:12:39.473 "workload": "randrw", 00:12:39.473 "percentage": 50, 00:12:39.473 "status": "finished", 00:12:39.473 "queue_depth": 1, 00:12:39.473 "io_size": 131072, 00:12:39.473 "runtime": 1.405801, 00:12:39.473 "iops": 14087.342376339184, 00:12:39.473 "mibps": 1760.917797042398, 00:12:39.473 "io_failed": 0, 00:12:39.473 "io_timeout": 0, 00:12:39.473 "avg_latency_us": 66.68867336258975, 00:12:39.473 "min_latency_us": 41.658181818181816, 00:12:39.473 "max_latency_us": 1876.7127272727273 00:12:39.473 } 00:12:39.473 ], 00:12:39.473 "core_count": 1 00:12:39.473 } 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63843 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63843 ']' 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63843 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63843 00:12:39.473 killing process with pid 63843 00:12:39.473 10:41:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63843' 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63843 00:12:39.473 10:41:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63843 00:12:39.473 [2024-10-30 10:41:00.733713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.473 [2024-10-30 10:41:00.860083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3Sv99HLto0 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:40.850 00:12:40.850 real 0m4.496s 00:12:40.850 user 0m5.614s 00:12:40.850 sys 0m0.552s 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.850 10:41:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.850 ************************************ 00:12:40.850 END TEST raid_write_error_test 00:12:40.850 
************************************ 00:12:40.850 10:41:02 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:40.850 10:41:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:40.850 10:41:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:12:40.850 10:41:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:40.850 10:41:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.850 10:41:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.850 ************************************ 00:12:40.850 START TEST raid_state_function_test 00:12:40.850 ************************************ 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:40.850 Process raid pid: 63981 00:12:40.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63981 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63981' 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63981 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63981 ']' 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:40.850 10:41:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.850 [2024-10-30 10:41:02.128879] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:12:40.850 [2024-10-30 10:41:02.129357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.850 [2024-10-30 10:41:02.316787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.109 [2024-10-30 10:41:02.450287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.368 [2024-10-30 10:41:02.659898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.368 [2024-10-30 10:41:02.659957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.950 [2024-10-30 10:41:03.143971] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.950 [2024-10-30 10:41:03.144058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.950 [2024-10-30 10:41:03.144077] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.950 [2024-10-30 10:41:03.144094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.950 [2024-10-30 10:41:03.144104] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:12:41.950 [2024-10-30 10:41:03.144118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.950 10:41:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.950 "name": "Existed_Raid", 00:12:41.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.950 "strip_size_kb": 64, 00:12:41.950 "state": "configuring", 00:12:41.950 "raid_level": "raid0", 00:12:41.950 "superblock": false, 00:12:41.950 "num_base_bdevs": 3, 00:12:41.950 "num_base_bdevs_discovered": 0, 00:12:41.950 "num_base_bdevs_operational": 3, 00:12:41.950 "base_bdevs_list": [ 00:12:41.950 { 00:12:41.950 "name": "BaseBdev1", 00:12:41.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.950 "is_configured": false, 00:12:41.950 "data_offset": 0, 00:12:41.950 "data_size": 0 00:12:41.950 }, 00:12:41.950 { 00:12:41.950 "name": "BaseBdev2", 00:12:41.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.950 "is_configured": false, 00:12:41.950 "data_offset": 0, 00:12:41.950 "data_size": 0 00:12:41.950 }, 00:12:41.950 { 00:12:41.950 "name": "BaseBdev3", 00:12:41.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.950 "is_configured": false, 00:12:41.950 "data_offset": 0, 00:12:41.950 "data_size": 0 00:12:41.950 } 00:12:41.950 ] 00:12:41.950 }' 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.950 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.516 [2024-10-30 10:41:03.696068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.516 [2024-10-30 10:41:03.696112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.516 [2024-10-30 10:41:03.708054] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.516 [2024-10-30 10:41:03.708115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.516 [2024-10-30 10:41:03.708132] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.516 [2024-10-30 10:41:03.708147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.516 [2024-10-30 10:41:03.708157] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.516 [2024-10-30 10:41:03.708170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.516 [2024-10-30 10:41:03.754371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.516 BaseBdev1 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.516 [ 00:12:42.516 { 00:12:42.516 "name": "BaseBdev1", 00:12:42.516 "aliases": [ 00:12:42.516 "a1701165-8240-4802-96af-bd8ddb0a16e6" 00:12:42.516 ], 00:12:42.516 "product_name": "Malloc disk", 00:12:42.516 "block_size": 512, 00:12:42.516 "num_blocks": 65536, 00:12:42.516 "uuid": "a1701165-8240-4802-96af-bd8ddb0a16e6", 00:12:42.516 "assigned_rate_limits": { 00:12:42.516 "rw_ios_per_sec": 0, 00:12:42.516 "rw_mbytes_per_sec": 0, 00:12:42.516 "r_mbytes_per_sec": 0, 00:12:42.516 "w_mbytes_per_sec": 0 00:12:42.516 }, 
00:12:42.516 "claimed": true, 00:12:42.516 "claim_type": "exclusive_write", 00:12:42.516 "zoned": false, 00:12:42.516 "supported_io_types": { 00:12:42.516 "read": true, 00:12:42.516 "write": true, 00:12:42.516 "unmap": true, 00:12:42.516 "flush": true, 00:12:42.516 "reset": true, 00:12:42.516 "nvme_admin": false, 00:12:42.516 "nvme_io": false, 00:12:42.516 "nvme_io_md": false, 00:12:42.516 "write_zeroes": true, 00:12:42.516 "zcopy": true, 00:12:42.516 "get_zone_info": false, 00:12:42.516 "zone_management": false, 00:12:42.516 "zone_append": false, 00:12:42.516 "compare": false, 00:12:42.516 "compare_and_write": false, 00:12:42.516 "abort": true, 00:12:42.516 "seek_hole": false, 00:12:42.516 "seek_data": false, 00:12:42.516 "copy": true, 00:12:42.516 "nvme_iov_md": false 00:12:42.516 }, 00:12:42.516 "memory_domains": [ 00:12:42.516 { 00:12:42.516 "dma_device_id": "system", 00:12:42.516 "dma_device_type": 1 00:12:42.516 }, 00:12:42.516 { 00:12:42.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.516 "dma_device_type": 2 00:12:42.516 } 00:12:42.516 ], 00:12:42.516 "driver_specific": {} 00:12:42.516 } 00:12:42.516 ] 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.516 10:41:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.516 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.517 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.517 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.517 "name": "Existed_Raid", 00:12:42.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.517 "strip_size_kb": 64, 00:12:42.517 "state": "configuring", 00:12:42.517 "raid_level": "raid0", 00:12:42.517 "superblock": false, 00:12:42.517 "num_base_bdevs": 3, 00:12:42.517 "num_base_bdevs_discovered": 1, 00:12:42.517 "num_base_bdevs_operational": 3, 00:12:42.517 "base_bdevs_list": [ 00:12:42.517 { 00:12:42.517 "name": "BaseBdev1", 00:12:42.517 "uuid": "a1701165-8240-4802-96af-bd8ddb0a16e6", 00:12:42.517 "is_configured": true, 00:12:42.517 "data_offset": 0, 00:12:42.517 "data_size": 65536 00:12:42.517 }, 00:12:42.517 { 00:12:42.517 "name": "BaseBdev2", 00:12:42.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.517 "is_configured": false, 00:12:42.517 
"data_offset": 0, 00:12:42.517 "data_size": 0 00:12:42.517 }, 00:12:42.517 { 00:12:42.517 "name": "BaseBdev3", 00:12:42.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.517 "is_configured": false, 00:12:42.517 "data_offset": 0, 00:12:42.517 "data_size": 0 00:12:42.517 } 00:12:42.517 ] 00:12:42.517 }' 00:12:42.517 10:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.517 10:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.082 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.082 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.082 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.082 [2024-10-30 10:41:04.334617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.082 [2024-10-30 10:41:04.334679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:43.082 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.083 [2024-10-30 10:41:04.342629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.083 [2024-10-30 10:41:04.345151] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.083 [2024-10-30 10:41:04.345242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:43.083 [2024-10-30 10:41:04.345434] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.083 [2024-10-30 10:41:04.345496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.083 "name": "Existed_Raid", 00:12:43.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.083 "strip_size_kb": 64, 00:12:43.083 "state": "configuring", 00:12:43.083 "raid_level": "raid0", 00:12:43.083 "superblock": false, 00:12:43.083 "num_base_bdevs": 3, 00:12:43.083 "num_base_bdevs_discovered": 1, 00:12:43.083 "num_base_bdevs_operational": 3, 00:12:43.083 "base_bdevs_list": [ 00:12:43.083 { 00:12:43.083 "name": "BaseBdev1", 00:12:43.083 "uuid": "a1701165-8240-4802-96af-bd8ddb0a16e6", 00:12:43.083 "is_configured": true, 00:12:43.083 "data_offset": 0, 00:12:43.083 "data_size": 65536 00:12:43.083 }, 00:12:43.083 { 00:12:43.083 "name": "BaseBdev2", 00:12:43.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.083 "is_configured": false, 00:12:43.083 "data_offset": 0, 00:12:43.083 "data_size": 0 00:12:43.083 }, 00:12:43.083 { 00:12:43.083 "name": "BaseBdev3", 00:12:43.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.083 "is_configured": false, 00:12:43.083 "data_offset": 0, 00:12:43.083 "data_size": 0 00:12:43.083 } 00:12:43.083 ] 00:12:43.083 }' 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.083 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.725 [2024-10-30 10:41:04.902176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.725 BaseBdev2 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.725 [ 00:12:43.725 { 00:12:43.725 "name": "BaseBdev2", 00:12:43.725 "aliases": [ 00:12:43.725 "17ef83b0-cfb3-4e42-8da0-4ebe2b8cf71f" 00:12:43.725 ], 00:12:43.725 
"product_name": "Malloc disk", 00:12:43.725 "block_size": 512, 00:12:43.725 "num_blocks": 65536, 00:12:43.725 "uuid": "17ef83b0-cfb3-4e42-8da0-4ebe2b8cf71f", 00:12:43.725 "assigned_rate_limits": { 00:12:43.725 "rw_ios_per_sec": 0, 00:12:43.725 "rw_mbytes_per_sec": 0, 00:12:43.725 "r_mbytes_per_sec": 0, 00:12:43.725 "w_mbytes_per_sec": 0 00:12:43.725 }, 00:12:43.725 "claimed": true, 00:12:43.725 "claim_type": "exclusive_write", 00:12:43.725 "zoned": false, 00:12:43.725 "supported_io_types": { 00:12:43.725 "read": true, 00:12:43.725 "write": true, 00:12:43.725 "unmap": true, 00:12:43.725 "flush": true, 00:12:43.725 "reset": true, 00:12:43.725 "nvme_admin": false, 00:12:43.725 "nvme_io": false, 00:12:43.725 "nvme_io_md": false, 00:12:43.725 "write_zeroes": true, 00:12:43.725 "zcopy": true, 00:12:43.725 "get_zone_info": false, 00:12:43.725 "zone_management": false, 00:12:43.725 "zone_append": false, 00:12:43.725 "compare": false, 00:12:43.725 "compare_and_write": false, 00:12:43.725 "abort": true, 00:12:43.725 "seek_hole": false, 00:12:43.725 "seek_data": false, 00:12:43.725 "copy": true, 00:12:43.725 "nvme_iov_md": false 00:12:43.725 }, 00:12:43.725 "memory_domains": [ 00:12:43.725 { 00:12:43.725 "dma_device_id": "system", 00:12:43.725 "dma_device_type": 1 00:12:43.725 }, 00:12:43.725 { 00:12:43.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.725 "dma_device_type": 2 00:12:43.725 } 00:12:43.725 ], 00:12:43.725 "driver_specific": {} 00:12:43.725 } 00:12:43.725 ] 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.725 "name": "Existed_Raid", 00:12:43.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.725 "strip_size_kb": 64, 00:12:43.725 "state": "configuring", 00:12:43.725 "raid_level": "raid0", 00:12:43.725 "superblock": false, 00:12:43.725 
"num_base_bdevs": 3, 00:12:43.725 "num_base_bdevs_discovered": 2, 00:12:43.725 "num_base_bdevs_operational": 3, 00:12:43.725 "base_bdevs_list": [ 00:12:43.725 { 00:12:43.725 "name": "BaseBdev1", 00:12:43.725 "uuid": "a1701165-8240-4802-96af-bd8ddb0a16e6", 00:12:43.725 "is_configured": true, 00:12:43.725 "data_offset": 0, 00:12:43.725 "data_size": 65536 00:12:43.725 }, 00:12:43.725 { 00:12:43.725 "name": "BaseBdev2", 00:12:43.725 "uuid": "17ef83b0-cfb3-4e42-8da0-4ebe2b8cf71f", 00:12:43.725 "is_configured": true, 00:12:43.725 "data_offset": 0, 00:12:43.725 "data_size": 65536 00:12:43.725 }, 00:12:43.725 { 00:12:43.725 "name": "BaseBdev3", 00:12:43.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.725 "is_configured": false, 00:12:43.725 "data_offset": 0, 00:12:43.725 "data_size": 0 00:12:43.725 } 00:12:43.725 ] 00:12:43.725 }' 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.725 10:41:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.290 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:44.290 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.290 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.290 [2024-10-30 10:41:05.508625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.290 [2024-10-30 10:41:05.508679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.290 [2024-10-30 10:41:05.508700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:44.290 [2024-10-30 10:41:05.509086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:44.290 [2024-10-30 10:41:05.509311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:12:44.290 [2024-10-30 10:41:05.509328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:44.290 [2024-10-30 10:41:05.509644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.290 BaseBdev3 00:12:44.290 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.290 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:44.290 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:44.290 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:44.290 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.291 [ 00:12:44.291 { 00:12:44.291 "name": "BaseBdev3", 00:12:44.291 "aliases": [ 00:12:44.291 
"f6364694-20c9-412f-ac16-a5c922441e4d" 00:12:44.291 ], 00:12:44.291 "product_name": "Malloc disk", 00:12:44.291 "block_size": 512, 00:12:44.291 "num_blocks": 65536, 00:12:44.291 "uuid": "f6364694-20c9-412f-ac16-a5c922441e4d", 00:12:44.291 "assigned_rate_limits": { 00:12:44.291 "rw_ios_per_sec": 0, 00:12:44.291 "rw_mbytes_per_sec": 0, 00:12:44.291 "r_mbytes_per_sec": 0, 00:12:44.291 "w_mbytes_per_sec": 0 00:12:44.291 }, 00:12:44.291 "claimed": true, 00:12:44.291 "claim_type": "exclusive_write", 00:12:44.291 "zoned": false, 00:12:44.291 "supported_io_types": { 00:12:44.291 "read": true, 00:12:44.291 "write": true, 00:12:44.291 "unmap": true, 00:12:44.291 "flush": true, 00:12:44.291 "reset": true, 00:12:44.291 "nvme_admin": false, 00:12:44.291 "nvme_io": false, 00:12:44.291 "nvme_io_md": false, 00:12:44.291 "write_zeroes": true, 00:12:44.291 "zcopy": true, 00:12:44.291 "get_zone_info": false, 00:12:44.291 "zone_management": false, 00:12:44.291 "zone_append": false, 00:12:44.291 "compare": false, 00:12:44.291 "compare_and_write": false, 00:12:44.291 "abort": true, 00:12:44.291 "seek_hole": false, 00:12:44.291 "seek_data": false, 00:12:44.291 "copy": true, 00:12:44.291 "nvme_iov_md": false 00:12:44.291 }, 00:12:44.291 "memory_domains": [ 00:12:44.291 { 00:12:44.291 "dma_device_id": "system", 00:12:44.291 "dma_device_type": 1 00:12:44.291 }, 00:12:44.291 { 00:12:44.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.291 "dma_device_type": 2 00:12:44.291 } 00:12:44.291 ], 00:12:44.291 "driver_specific": {} 00:12:44.291 } 00:12:44.291 ] 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.291 
10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.291 "name": "Existed_Raid", 00:12:44.291 "uuid": "72604d7c-6fee-4e51-a2ec-392dd3aedd77", 00:12:44.291 "strip_size_kb": 64, 00:12:44.291 "state": "online", 00:12:44.291 
"raid_level": "raid0", 00:12:44.291 "superblock": false, 00:12:44.291 "num_base_bdevs": 3, 00:12:44.291 "num_base_bdevs_discovered": 3, 00:12:44.291 "num_base_bdevs_operational": 3, 00:12:44.291 "base_bdevs_list": [ 00:12:44.291 { 00:12:44.291 "name": "BaseBdev1", 00:12:44.291 "uuid": "a1701165-8240-4802-96af-bd8ddb0a16e6", 00:12:44.291 "is_configured": true, 00:12:44.291 "data_offset": 0, 00:12:44.291 "data_size": 65536 00:12:44.291 }, 00:12:44.291 { 00:12:44.291 "name": "BaseBdev2", 00:12:44.291 "uuid": "17ef83b0-cfb3-4e42-8da0-4ebe2b8cf71f", 00:12:44.291 "is_configured": true, 00:12:44.291 "data_offset": 0, 00:12:44.291 "data_size": 65536 00:12:44.291 }, 00:12:44.291 { 00:12:44.291 "name": "BaseBdev3", 00:12:44.291 "uuid": "f6364694-20c9-412f-ac16-a5c922441e4d", 00:12:44.291 "is_configured": true, 00:12:44.291 "data_offset": 0, 00:12:44.291 "data_size": 65536 00:12:44.291 } 00:12:44.291 ] 00:12:44.291 }' 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.291 10:41:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.857 [2024-10-30 10:41:06.077239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.857 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.857 "name": "Existed_Raid", 00:12:44.857 "aliases": [ 00:12:44.857 "72604d7c-6fee-4e51-a2ec-392dd3aedd77" 00:12:44.857 ], 00:12:44.857 "product_name": "Raid Volume", 00:12:44.857 "block_size": 512, 00:12:44.857 "num_blocks": 196608, 00:12:44.857 "uuid": "72604d7c-6fee-4e51-a2ec-392dd3aedd77", 00:12:44.857 "assigned_rate_limits": { 00:12:44.857 "rw_ios_per_sec": 0, 00:12:44.857 "rw_mbytes_per_sec": 0, 00:12:44.857 "r_mbytes_per_sec": 0, 00:12:44.857 "w_mbytes_per_sec": 0 00:12:44.857 }, 00:12:44.857 "claimed": false, 00:12:44.857 "zoned": false, 00:12:44.857 "supported_io_types": { 00:12:44.857 "read": true, 00:12:44.857 "write": true, 00:12:44.857 "unmap": true, 00:12:44.857 "flush": true, 00:12:44.857 "reset": true, 00:12:44.857 "nvme_admin": false, 00:12:44.857 "nvme_io": false, 00:12:44.857 "nvme_io_md": false, 00:12:44.857 "write_zeroes": true, 00:12:44.857 "zcopy": false, 00:12:44.858 "get_zone_info": false, 00:12:44.858 "zone_management": false, 00:12:44.858 "zone_append": false, 00:12:44.858 "compare": false, 00:12:44.858 "compare_and_write": false, 00:12:44.858 "abort": false, 00:12:44.858 "seek_hole": false, 00:12:44.858 "seek_data": false, 00:12:44.858 "copy": false, 00:12:44.858 "nvme_iov_md": false 00:12:44.858 }, 00:12:44.858 "memory_domains": [ 00:12:44.858 { 00:12:44.858 "dma_device_id": "system", 00:12:44.858 "dma_device_type": 1 00:12:44.858 }, 00:12:44.858 { 00:12:44.858 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.858 "dma_device_type": 2 00:12:44.858 }, 00:12:44.858 { 00:12:44.858 "dma_device_id": "system", 00:12:44.858 "dma_device_type": 1 00:12:44.858 }, 00:12:44.858 { 00:12:44.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.858 "dma_device_type": 2 00:12:44.858 }, 00:12:44.858 { 00:12:44.858 "dma_device_id": "system", 00:12:44.858 "dma_device_type": 1 00:12:44.858 }, 00:12:44.858 { 00:12:44.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.858 "dma_device_type": 2 00:12:44.858 } 00:12:44.858 ], 00:12:44.858 "driver_specific": { 00:12:44.858 "raid": { 00:12:44.858 "uuid": "72604d7c-6fee-4e51-a2ec-392dd3aedd77", 00:12:44.858 "strip_size_kb": 64, 00:12:44.858 "state": "online", 00:12:44.858 "raid_level": "raid0", 00:12:44.858 "superblock": false, 00:12:44.858 "num_base_bdevs": 3, 00:12:44.858 "num_base_bdevs_discovered": 3, 00:12:44.858 "num_base_bdevs_operational": 3, 00:12:44.858 "base_bdevs_list": [ 00:12:44.858 { 00:12:44.858 "name": "BaseBdev1", 00:12:44.858 "uuid": "a1701165-8240-4802-96af-bd8ddb0a16e6", 00:12:44.858 "is_configured": true, 00:12:44.858 "data_offset": 0, 00:12:44.858 "data_size": 65536 00:12:44.858 }, 00:12:44.858 { 00:12:44.858 "name": "BaseBdev2", 00:12:44.858 "uuid": "17ef83b0-cfb3-4e42-8da0-4ebe2b8cf71f", 00:12:44.858 "is_configured": true, 00:12:44.858 "data_offset": 0, 00:12:44.858 "data_size": 65536 00:12:44.858 }, 00:12:44.858 { 00:12:44.858 "name": "BaseBdev3", 00:12:44.858 "uuid": "f6364694-20c9-412f-ac16-a5c922441e4d", 00:12:44.858 "is_configured": true, 00:12:44.858 "data_offset": 0, 00:12:44.858 "data_size": 65536 00:12:44.858 } 00:12:44.858 ] 00:12:44.858 } 00:12:44.858 } 00:12:44.858 }' 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 
00:12:44.858 BaseBdev2 00:12:44.858 BaseBdev3' 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.858 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 10:41:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 [2024-10-30 10:41:06.392986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.116 [2024-10-30 10:41:06.393021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.116 [2024-10-30 10:41:06.393094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@260 -- # local expected_state 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.116 10:41:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.116 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.116 "name": "Existed_Raid", 00:12:45.116 "uuid": "72604d7c-6fee-4e51-a2ec-392dd3aedd77", 00:12:45.116 "strip_size_kb": 64, 00:12:45.117 "state": "offline", 00:12:45.117 "raid_level": "raid0", 00:12:45.117 "superblock": false, 00:12:45.117 "num_base_bdevs": 3, 00:12:45.117 "num_base_bdevs_discovered": 2, 00:12:45.117 "num_base_bdevs_operational": 2, 00:12:45.117 "base_bdevs_list": [ 00:12:45.117 { 00:12:45.117 "name": null, 00:12:45.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.117 "is_configured": false, 00:12:45.117 "data_offset": 0, 00:12:45.117 "data_size": 65536 00:12:45.117 }, 00:12:45.117 { 00:12:45.117 "name": "BaseBdev2", 00:12:45.117 "uuid": "17ef83b0-cfb3-4e42-8da0-4ebe2b8cf71f", 00:12:45.117 "is_configured": true, 00:12:45.117 "data_offset": 0, 00:12:45.117 "data_size": 65536 00:12:45.117 }, 00:12:45.117 { 00:12:45.117 "name": "BaseBdev3", 00:12:45.117 "uuid": "f6364694-20c9-412f-ac16-a5c922441e4d", 00:12:45.117 "is_configured": true, 00:12:45.117 "data_offset": 0, 00:12:45.117 "data_size": 65536 00:12:45.117 } 00:12:45.117 ] 00:12:45.117 }' 00:12:45.117 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.117 10:41:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.684 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:45.684 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.684 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.684 10:41:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.684 [2024-10-30 10:41:07.060304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.684 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.944 [2024-10-30 10:41:07.213709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:45.944 [2024-10-30 10:41:07.213774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 
00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.944 BaseBdev2 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:45.944 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.944 10:41:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.203 [ 00:12:46.203 { 00:12:46.203 "name": "BaseBdev2", 00:12:46.203 "aliases": [ 00:12:46.203 "e1f14d0d-93f8-4494-8ec1-8c5ca2578081" 00:12:46.203 ], 00:12:46.203 "product_name": "Malloc disk", 00:12:46.203 "block_size": 512, 00:12:46.203 "num_blocks": 65536, 00:12:46.203 "uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:46.203 "assigned_rate_limits": { 00:12:46.203 "rw_ios_per_sec": 0, 00:12:46.203 "rw_mbytes_per_sec": 0, 00:12:46.203 "r_mbytes_per_sec": 0, 00:12:46.203 "w_mbytes_per_sec": 0 00:12:46.203 }, 00:12:46.203 "claimed": false, 00:12:46.203 "zoned": false, 00:12:46.203 "supported_io_types": { 00:12:46.203 "read": true, 00:12:46.203 "write": true, 00:12:46.203 "unmap": true, 00:12:46.203 "flush": true, 00:12:46.203 "reset": true, 00:12:46.203 "nvme_admin": false, 00:12:46.203 "nvme_io": false, 00:12:46.203 "nvme_io_md": false, 00:12:46.203 "write_zeroes": true, 00:12:46.203 "zcopy": true, 00:12:46.203 "get_zone_info": false, 00:12:46.203 "zone_management": false, 00:12:46.203 "zone_append": false, 00:12:46.203 "compare": false, 00:12:46.203 "compare_and_write": false, 00:12:46.203 "abort": true, 00:12:46.203 "seek_hole": false, 00:12:46.203 "seek_data": false, 00:12:46.203 "copy": true, 00:12:46.203 "nvme_iov_md": false 00:12:46.203 }, 00:12:46.203 "memory_domains": [ 00:12:46.203 { 00:12:46.203 "dma_device_id": "system", 00:12:46.203 "dma_device_type": 1 00:12:46.203 }, 00:12:46.203 { 00:12:46.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.203 "dma_device_type": 2 00:12:46.203 } 00:12:46.203 ], 00:12:46.203 "driver_specific": {} 00:12:46.203 } 00:12:46.203 ] 00:12:46.203 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( 
i++ )) 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.204 BaseBdev3 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.204 10:41:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.204 [ 00:12:46.204 { 00:12:46.204 "name": "BaseBdev3", 00:12:46.204 "aliases": [ 00:12:46.204 "984a08e0-2d91-4f14-97a6-58cdb4e25421" 00:12:46.204 ], 00:12:46.204 "product_name": "Malloc disk", 00:12:46.204 "block_size": 512, 00:12:46.204 "num_blocks": 65536, 00:12:46.204 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:46.204 "assigned_rate_limits": { 00:12:46.204 "rw_ios_per_sec": 0, 00:12:46.204 "rw_mbytes_per_sec": 0, 00:12:46.204 "r_mbytes_per_sec": 0, 00:12:46.204 "w_mbytes_per_sec": 0 00:12:46.204 }, 00:12:46.204 "claimed": false, 00:12:46.204 "zoned": false, 00:12:46.204 "supported_io_types": { 00:12:46.204 "read": true, 00:12:46.204 "write": true, 00:12:46.204 "unmap": true, 00:12:46.204 "flush": true, 00:12:46.204 "reset": true, 00:12:46.204 "nvme_admin": false, 00:12:46.204 "nvme_io": false, 00:12:46.204 "nvme_io_md": false, 00:12:46.204 "write_zeroes": true, 00:12:46.204 "zcopy": true, 00:12:46.204 "get_zone_info": false, 00:12:46.204 "zone_management": false, 00:12:46.204 "zone_append": false, 00:12:46.204 "compare": false, 00:12:46.204 "compare_and_write": false, 00:12:46.204 "abort": true, 00:12:46.204 "seek_hole": false, 00:12:46.204 "seek_data": false, 00:12:46.204 "copy": true, 00:12:46.204 "nvme_iov_md": false 00:12:46.204 }, 00:12:46.204 "memory_domains": [ 00:12:46.204 { 00:12:46.204 "dma_device_id": "system", 00:12:46.204 "dma_device_type": 1 00:12:46.204 }, 00:12:46.204 { 00:12:46.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.204 "dma_device_type": 2 00:12:46.204 } 00:12:46.204 ], 00:12:46.204 "driver_specific": {} 00:12:46.204 } 00:12:46.204 ] 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( 
i++ )) 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.204 [2024-10-30 10:41:07.507553] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.204 [2024-10-30 10:41:07.507612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.204 [2024-10-30 10:41:07.507645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.204 [2024-10-30 10:41:07.510035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.204 10:41:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.204 "name": "Existed_Raid", 00:12:46.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.204 "strip_size_kb": 64, 00:12:46.204 "state": "configuring", 00:12:46.204 "raid_level": "raid0", 00:12:46.204 "superblock": false, 00:12:46.204 "num_base_bdevs": 3, 00:12:46.204 "num_base_bdevs_discovered": 2, 00:12:46.204 "num_base_bdevs_operational": 3, 00:12:46.204 "base_bdevs_list": [ 00:12:46.204 { 00:12:46.204 "name": "BaseBdev1", 00:12:46.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.204 "is_configured": false, 00:12:46.204 "data_offset": 0, 00:12:46.204 "data_size": 0 00:12:46.204 }, 00:12:46.204 { 00:12:46.204 "name": "BaseBdev2", 00:12:46.204 "uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:46.204 "is_configured": true, 00:12:46.204 "data_offset": 0, 00:12:46.204 "data_size": 65536 00:12:46.204 }, 00:12:46.204 { 00:12:46.204 "name": "BaseBdev3", 00:12:46.204 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:46.204 "is_configured": true, 00:12:46.204 "data_offset": 0, 
00:12:46.204 "data_size": 65536 00:12:46.204 } 00:12:46.204 ] 00:12:46.204 }' 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.204 10:41:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 [2024-10-30 10:41:08.083711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.773 "name": "Existed_Raid", 00:12:46.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.773 "strip_size_kb": 64, 00:12:46.773 "state": "configuring", 00:12:46.773 "raid_level": "raid0", 00:12:46.773 "superblock": false, 00:12:46.773 "num_base_bdevs": 3, 00:12:46.773 "num_base_bdevs_discovered": 1, 00:12:46.773 "num_base_bdevs_operational": 3, 00:12:46.773 "base_bdevs_list": [ 00:12:46.773 { 00:12:46.773 "name": "BaseBdev1", 00:12:46.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.773 "is_configured": false, 00:12:46.773 "data_offset": 0, 00:12:46.773 "data_size": 0 00:12:46.773 }, 00:12:46.773 { 00:12:46.773 "name": null, 00:12:46.773 "uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:46.773 "is_configured": false, 00:12:46.773 "data_offset": 0, 00:12:46.773 "data_size": 65536 00:12:46.773 }, 00:12:46.773 { 00:12:46.773 "name": "BaseBdev3", 00:12:46.773 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:46.773 "is_configured": true, 00:12:46.773 "data_offset": 0, 00:12:46.773 "data_size": 65536 00:12:46.773 } 00:12:46.773 ] 00:12:46.773 }' 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.773 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.341 10:41:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.341 [2024-10-30 10:41:08.710010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.341 BaseBdev1 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.341 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.341 [ 00:12:47.341 { 00:12:47.341 "name": "BaseBdev1", 00:12:47.341 "aliases": [ 00:12:47.341 "0d6e8145-91f7-492c-bbdf-832e8be0b887" 00:12:47.341 ], 00:12:47.341 "product_name": "Malloc disk", 00:12:47.341 "block_size": 512, 00:12:47.341 "num_blocks": 65536, 00:12:47.341 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:47.341 "assigned_rate_limits": { 00:12:47.341 "rw_ios_per_sec": 0, 00:12:47.341 "rw_mbytes_per_sec": 0, 00:12:47.341 "r_mbytes_per_sec": 0, 00:12:47.341 "w_mbytes_per_sec": 0 00:12:47.341 }, 00:12:47.341 "claimed": true, 00:12:47.341 "claim_type": "exclusive_write", 00:12:47.341 "zoned": false, 00:12:47.341 "supported_io_types": { 00:12:47.341 "read": true, 00:12:47.341 "write": true, 00:12:47.341 "unmap": true, 00:12:47.341 "flush": true, 00:12:47.341 "reset": true, 00:12:47.341 "nvme_admin": false, 00:12:47.341 "nvme_io": false, 00:12:47.341 "nvme_io_md": false, 00:12:47.341 "write_zeroes": true, 00:12:47.341 "zcopy": true, 00:12:47.341 "get_zone_info": false, 00:12:47.341 "zone_management": false, 00:12:47.341 "zone_append": false, 00:12:47.341 "compare": false, 00:12:47.341 "compare_and_write": false, 00:12:47.341 "abort": true, 00:12:47.341 "seek_hole": false, 00:12:47.341 "seek_data": false, 00:12:47.341 
"copy": true, 00:12:47.341 "nvme_iov_md": false 00:12:47.341 }, 00:12:47.341 "memory_domains": [ 00:12:47.341 { 00:12:47.341 "dma_device_id": "system", 00:12:47.341 "dma_device_type": 1 00:12:47.341 }, 00:12:47.341 { 00:12:47.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.341 "dma_device_type": 2 00:12:47.341 } 00:12:47.341 ], 00:12:47.341 "driver_specific": {} 00:12:47.341 } 00:12:47.341 ] 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.342 "name": "Existed_Raid", 00:12:47.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.342 "strip_size_kb": 64, 00:12:47.342 "state": "configuring", 00:12:47.342 "raid_level": "raid0", 00:12:47.342 "superblock": false, 00:12:47.342 "num_base_bdevs": 3, 00:12:47.342 "num_base_bdevs_discovered": 2, 00:12:47.342 "num_base_bdevs_operational": 3, 00:12:47.342 "base_bdevs_list": [ 00:12:47.342 { 00:12:47.342 "name": "BaseBdev1", 00:12:47.342 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:47.342 "is_configured": true, 00:12:47.342 "data_offset": 0, 00:12:47.342 "data_size": 65536 00:12:47.342 }, 00:12:47.342 { 00:12:47.342 "name": null, 00:12:47.342 "uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:47.342 "is_configured": false, 00:12:47.342 "data_offset": 0, 00:12:47.342 "data_size": 65536 00:12:47.342 }, 00:12:47.342 { 00:12:47.342 "name": "BaseBdev3", 00:12:47.342 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:47.342 "is_configured": true, 00:12:47.342 "data_offset": 0, 00:12:47.342 "data_size": 65536 00:12:47.342 } 00:12:47.342 ] 00:12:47.342 }' 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.342 10:41:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.944 [2024-10-30 10:41:09.362176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.944 10:41:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.944 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.203 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.203 "name": "Existed_Raid", 00:12:48.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.203 "strip_size_kb": 64, 00:12:48.203 "state": "configuring", 00:12:48.203 "raid_level": "raid0", 00:12:48.203 "superblock": false, 00:12:48.203 "num_base_bdevs": 3, 00:12:48.203 "num_base_bdevs_discovered": 1, 00:12:48.203 "num_base_bdevs_operational": 3, 00:12:48.203 "base_bdevs_list": [ 00:12:48.203 { 00:12:48.203 "name": "BaseBdev1", 00:12:48.203 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:48.203 "is_configured": true, 00:12:48.203 "data_offset": 0, 00:12:48.203 "data_size": 65536 00:12:48.203 }, 00:12:48.203 { 00:12:48.203 "name": null, 00:12:48.203 "uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:48.203 "is_configured": false, 00:12:48.203 "data_offset": 0, 00:12:48.203 "data_size": 65536 00:12:48.203 }, 00:12:48.203 { 00:12:48.203 "name": null, 00:12:48.203 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:48.203 "is_configured": false, 00:12:48.203 "data_offset": 0, 00:12:48.203 "data_size": 65536 00:12:48.203 } 00:12:48.203 ] 00:12:48.203 }' 00:12:48.203 10:41:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.203 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.461 [2024-10-30 10:41:09.910353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.461 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.462 10:41:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.462 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.719 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.719 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.746 "name": "Existed_Raid", 00:12:48.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.746 "strip_size_kb": 64, 00:12:48.746 "state": "configuring", 00:12:48.746 "raid_level": "raid0", 00:12:48.746 "superblock": false, 00:12:48.746 "num_base_bdevs": 3, 00:12:48.746 "num_base_bdevs_discovered": 2, 00:12:48.747 "num_base_bdevs_operational": 3, 00:12:48.747 "base_bdevs_list": [ 00:12:48.747 { 00:12:48.747 "name": "BaseBdev1", 00:12:48.747 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:48.747 "is_configured": true, 00:12:48.747 "data_offset": 0, 00:12:48.747 "data_size": 65536 00:12:48.747 }, 00:12:48.747 { 00:12:48.747 "name": null, 00:12:48.747 
"uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:48.747 "is_configured": false, 00:12:48.747 "data_offset": 0, 00:12:48.747 "data_size": 65536 00:12:48.747 }, 00:12:48.747 { 00:12:48.747 "name": "BaseBdev3", 00:12:48.747 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:48.747 "is_configured": true, 00:12:48.747 "data_offset": 0, 00:12:48.747 "data_size": 65536 00:12:48.747 } 00:12:48.747 ] 00:12:48.747 }' 00:12:48.747 10:41:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.747 10:41:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.005 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.005 [2024-10-30 10:41:10.474520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 
00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.264 "name": "Existed_Raid", 00:12:49.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.264 "strip_size_kb": 64, 00:12:49.264 "state": "configuring", 00:12:49.264 "raid_level": "raid0", 00:12:49.264 "superblock": false, 00:12:49.264 "num_base_bdevs": 3, 00:12:49.264 
"num_base_bdevs_discovered": 1, 00:12:49.264 "num_base_bdevs_operational": 3, 00:12:49.264 "base_bdevs_list": [ 00:12:49.264 { 00:12:49.264 "name": null, 00:12:49.264 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:49.264 "is_configured": false, 00:12:49.264 "data_offset": 0, 00:12:49.264 "data_size": 65536 00:12:49.264 }, 00:12:49.264 { 00:12:49.264 "name": null, 00:12:49.264 "uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:49.264 "is_configured": false, 00:12:49.264 "data_offset": 0, 00:12:49.264 "data_size": 65536 00:12:49.264 }, 00:12:49.264 { 00:12:49.264 "name": "BaseBdev3", 00:12:49.264 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:49.264 "is_configured": true, 00:12:49.264 "data_offset": 0, 00:12:49.264 "data_size": 65536 00:12:49.264 } 00:12:49.264 ] 00:12:49.264 }' 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.264 10:41:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:49.830 [2024-10-30 10:41:11.126619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.830 "name": "Existed_Raid", 00:12:49.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.830 "strip_size_kb": 64, 00:12:49.830 "state": "configuring", 00:12:49.830 "raid_level": "raid0", 00:12:49.830 "superblock": false, 00:12:49.830 "num_base_bdevs": 3, 00:12:49.830 "num_base_bdevs_discovered": 2, 00:12:49.830 "num_base_bdevs_operational": 3, 00:12:49.830 "base_bdevs_list": [ 00:12:49.830 { 00:12:49.830 "name": null, 00:12:49.830 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:49.830 "is_configured": false, 00:12:49.830 "data_offset": 0, 00:12:49.830 "data_size": 65536 00:12:49.830 }, 00:12:49.830 { 00:12:49.830 "name": "BaseBdev2", 00:12:49.830 "uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:49.830 "is_configured": true, 00:12:49.830 "data_offset": 0, 00:12:49.830 "data_size": 65536 00:12:49.830 }, 00:12:49.830 { 00:12:49.830 "name": "BaseBdev3", 00:12:49.830 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:49.830 "is_configured": true, 00:12:49.830 "data_offset": 0, 00:12:49.830 "data_size": 65536 00:12:49.830 } 00:12:49.830 ] 00:12:49.830 }' 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.830 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.396 
10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0d6e8145-91f7-492c-bbdf-832e8be0b887 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 [2024-10-30 10:41:11.804467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:50.396 [2024-10-30 10:41:11.804530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:50.396 [2024-10-30 10:41:11.804546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:50.396 [2024-10-30 10:41:11.804851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:50.396 [2024-10-30 10:41:11.805118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:50.396 [2024-10-30 10:41:11.805136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:50.396 [2024-10-30 10:41:11.805453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.396 NewBaseBdev 00:12:50.396 10:41:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 [ 00:12:50.396 { 00:12:50.396 "name": "NewBaseBdev", 00:12:50.396 "aliases": [ 00:12:50.396 "0d6e8145-91f7-492c-bbdf-832e8be0b887" 00:12:50.396 ], 00:12:50.396 "product_name": "Malloc disk", 00:12:50.396 "block_size": 512, 00:12:50.396 "num_blocks": 65536, 00:12:50.396 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:50.396 "assigned_rate_limits": { 00:12:50.396 "rw_ios_per_sec": 0, 00:12:50.396 "rw_mbytes_per_sec": 0, 
00:12:50.396 "r_mbytes_per_sec": 0, 00:12:50.396 "w_mbytes_per_sec": 0 00:12:50.396 }, 00:12:50.396 "claimed": true, 00:12:50.396 "claim_type": "exclusive_write", 00:12:50.396 "zoned": false, 00:12:50.396 "supported_io_types": { 00:12:50.396 "read": true, 00:12:50.396 "write": true, 00:12:50.396 "unmap": true, 00:12:50.396 "flush": true, 00:12:50.396 "reset": true, 00:12:50.396 "nvme_admin": false, 00:12:50.396 "nvme_io": false, 00:12:50.396 "nvme_io_md": false, 00:12:50.396 "write_zeroes": true, 00:12:50.396 "zcopy": true, 00:12:50.396 "get_zone_info": false, 00:12:50.396 "zone_management": false, 00:12:50.396 "zone_append": false, 00:12:50.396 "compare": false, 00:12:50.396 "compare_and_write": false, 00:12:50.396 "abort": true, 00:12:50.396 "seek_hole": false, 00:12:50.396 "seek_data": false, 00:12:50.396 "copy": true, 00:12:50.396 "nvme_iov_md": false 00:12:50.396 }, 00:12:50.396 "memory_domains": [ 00:12:50.396 { 00:12:50.396 "dma_device_id": "system", 00:12:50.396 "dma_device_type": 1 00:12:50.396 }, 00:12:50.396 { 00:12:50.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.396 "dma_device_type": 2 00:12:50.396 } 00:12:50.396 ], 00:12:50.396 "driver_specific": {} 00:12:50.396 } 00:12:50.396 ] 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.396 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.654 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.654 "name": "Existed_Raid", 00:12:50.654 "uuid": "5ed983d6-c6c5-478b-bb98-ef812f72a026", 00:12:50.654 "strip_size_kb": 64, 00:12:50.654 "state": "online", 00:12:50.654 "raid_level": "raid0", 00:12:50.654 "superblock": false, 00:12:50.654 "num_base_bdevs": 3, 00:12:50.654 "num_base_bdevs_discovered": 3, 00:12:50.654 "num_base_bdevs_operational": 3, 00:12:50.654 "base_bdevs_list": [ 00:12:50.654 { 00:12:50.654 "name": "NewBaseBdev", 00:12:50.654 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:50.654 "is_configured": true, 00:12:50.654 "data_offset": 0, 00:12:50.654 "data_size": 65536 00:12:50.654 }, 00:12:50.654 { 00:12:50.654 "name": "BaseBdev2", 00:12:50.654 "uuid": 
"e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:50.654 "is_configured": true, 00:12:50.654 "data_offset": 0, 00:12:50.654 "data_size": 65536 00:12:50.654 }, 00:12:50.654 { 00:12:50.654 "name": "BaseBdev3", 00:12:50.654 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:50.654 "is_configured": true, 00:12:50.654 "data_offset": 0, 00:12:50.654 "data_size": 65536 00:12:50.654 } 00:12:50.654 ] 00:12:50.654 }' 00:12:50.654 10:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.654 10:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.913 [2024-10-30 10:41:12.361183] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.913 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.172 10:41:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.172 "name": "Existed_Raid", 00:12:51.172 "aliases": [ 00:12:51.172 "5ed983d6-c6c5-478b-bb98-ef812f72a026" 00:12:51.172 ], 00:12:51.172 "product_name": "Raid Volume", 00:12:51.172 "block_size": 512, 00:12:51.172 "num_blocks": 196608, 00:12:51.172 "uuid": "5ed983d6-c6c5-478b-bb98-ef812f72a026", 00:12:51.172 "assigned_rate_limits": { 00:12:51.172 "rw_ios_per_sec": 0, 00:12:51.172 "rw_mbytes_per_sec": 0, 00:12:51.172 "r_mbytes_per_sec": 0, 00:12:51.172 "w_mbytes_per_sec": 0 00:12:51.172 }, 00:12:51.172 "claimed": false, 00:12:51.172 "zoned": false, 00:12:51.172 "supported_io_types": { 00:12:51.172 "read": true, 00:12:51.172 "write": true, 00:12:51.172 "unmap": true, 00:12:51.172 "flush": true, 00:12:51.172 "reset": true, 00:12:51.172 "nvme_admin": false, 00:12:51.172 "nvme_io": false, 00:12:51.172 "nvme_io_md": false, 00:12:51.172 "write_zeroes": true, 00:12:51.172 "zcopy": false, 00:12:51.172 "get_zone_info": false, 00:12:51.172 "zone_management": false, 00:12:51.172 "zone_append": false, 00:12:51.172 "compare": false, 00:12:51.172 "compare_and_write": false, 00:12:51.172 "abort": false, 00:12:51.172 "seek_hole": false, 00:12:51.172 "seek_data": false, 00:12:51.172 "copy": false, 00:12:51.172 "nvme_iov_md": false 00:12:51.172 }, 00:12:51.172 "memory_domains": [ 00:12:51.172 { 00:12:51.172 "dma_device_id": "system", 00:12:51.172 "dma_device_type": 1 00:12:51.172 }, 00:12:51.172 { 00:12:51.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.172 "dma_device_type": 2 00:12:51.172 }, 00:12:51.172 { 00:12:51.172 "dma_device_id": "system", 00:12:51.172 "dma_device_type": 1 00:12:51.172 }, 00:12:51.172 { 00:12:51.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.172 "dma_device_type": 2 00:12:51.172 }, 00:12:51.172 { 00:12:51.172 "dma_device_id": "system", 00:12:51.172 "dma_device_type": 1 00:12:51.172 }, 00:12:51.172 { 00:12:51.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:51.172 "dma_device_type": 2 00:12:51.172 } 00:12:51.172 ], 00:12:51.172 "driver_specific": { 00:12:51.172 "raid": { 00:12:51.172 "uuid": "5ed983d6-c6c5-478b-bb98-ef812f72a026", 00:12:51.172 "strip_size_kb": 64, 00:12:51.172 "state": "online", 00:12:51.172 "raid_level": "raid0", 00:12:51.172 "superblock": false, 00:12:51.172 "num_base_bdevs": 3, 00:12:51.172 "num_base_bdevs_discovered": 3, 00:12:51.172 "num_base_bdevs_operational": 3, 00:12:51.172 "base_bdevs_list": [ 00:12:51.172 { 00:12:51.172 "name": "NewBaseBdev", 00:12:51.172 "uuid": "0d6e8145-91f7-492c-bbdf-832e8be0b887", 00:12:51.172 "is_configured": true, 00:12:51.172 "data_offset": 0, 00:12:51.172 "data_size": 65536 00:12:51.172 }, 00:12:51.172 { 00:12:51.172 "name": "BaseBdev2", 00:12:51.172 "uuid": "e1f14d0d-93f8-4494-8ec1-8c5ca2578081", 00:12:51.172 "is_configured": true, 00:12:51.172 "data_offset": 0, 00:12:51.172 "data_size": 65536 00:12:51.172 }, 00:12:51.172 { 00:12:51.172 "name": "BaseBdev3", 00:12:51.172 "uuid": "984a08e0-2d91-4f14-97a6-58cdb4e25421", 00:12:51.172 "is_configured": true, 00:12:51.172 "data_offset": 0, 00:12:51.172 "data_size": 65536 00:12:51.172 } 00:12:51.172 ] 00:12:51.172 } 00:12:51.172 } 00:12:51.172 }' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:51.172 BaseBdev2 00:12:51.172 BaseBdev3' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.172 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.431 [2024-10-30 10:41:12.692837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.431 [2024-10-30 10:41:12.692873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.431 [2024-10-30 10:41:12.693002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.431 [2024-10-30 10:41:12.693081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.431 [2024-10-30 10:41:12.693103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63981 00:12:51.431 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 63981 
']' 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63981 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63981 00:12:51.432 killing process with pid 63981 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63981' 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63981 00:12:51.432 [2024-10-30 10:41:12.734061] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.432 10:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63981 00:12:51.691 [2024-10-30 10:41:13.001635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:52.645 00:12:52.645 real 0m12.001s 00:12:52.645 user 0m20.075s 00:12:52.645 sys 0m1.564s 00:12:52.645 ************************************ 00:12:52.645 END TEST raid_state_function_test 00:12:52.645 ************************************ 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.645 10:41:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:52.645 
10:41:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:52.645 10:41:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.645 10:41:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.645 ************************************ 00:12:52.645 START TEST raid_state_function_test_sb 00:12:52.645 ************************************ 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:52.645 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:52.645 Process raid pid: 64619 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64619 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64619' 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64619 00:12:52.646 10:41:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64619 ']' 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.646 10:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.904 [2024-10-30 10:41:14.183020] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:12:52.904 [2024-10-30 10:41:14.183438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.163 [2024-10-30 10:41:14.374528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.163 [2024-10-30 10:41:14.534342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.420 [2024-10-30 10:41:14.738271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.420 [2024-10-30 10:41:14.738337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:53.986 [2024-10-30 10:41:15.152532] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:53.986 [2024-10-30 10:41:15.152773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:53.986 [2024-10-30 10:41:15.152802] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:53.986 [2024-10-30 10:41:15.152820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:53.986 [2024-10-30 10:41:15.152830] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:53.986 [2024-10-30 10:41:15.152844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:53.986 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:53.987 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:53.987 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:53.987 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:53.987 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:53.987 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:53.987 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:53.987 "name": "Existed_Raid",
00:12:53.987 "uuid": "3526a5fd-fa4f-496f-8c26-a71d597c3ca4",
00:12:53.987 "strip_size_kb": 64,
00:12:53.987 "state": "configuring",
00:12:53.987 "raid_level": "raid0",
00:12:53.987 "superblock": true,
00:12:53.987 "num_base_bdevs": 3,
00:12:53.987 "num_base_bdevs_discovered": 0,
00:12:53.987 "num_base_bdevs_operational": 3,
00:12:53.987 "base_bdevs_list": [
00:12:53.987 {
00:12:53.987 "name": "BaseBdev1",
00:12:53.987 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:53.987 "is_configured": false,
00:12:53.987 "data_offset": 0,
00:12:53.987 "data_size": 0
00:12:53.987 },
00:12:53.987 {
00:12:53.987 "name": "BaseBdev2",
00:12:53.987 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:53.987 "is_configured": false,
00:12:53.987 "data_offset": 0,
00:12:53.987 "data_size": 0
00:12:53.987 },
00:12:53.987 {
00:12:53.987 "name": "BaseBdev3",
00:12:53.987 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:53.987 "is_configured": false,
00:12:53.987 "data_offset": 0,
00:12:53.987 "data_size": 0
00:12:53.987 }
00:12:53.987 ]
00:12:53.987 }'
00:12:53.987 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:53.987 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.246 [2024-10-30 10:41:15.676649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:54.246 [2024-10-30 10:41:15.676872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.246 [2024-10-30 10:41:15.684624] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:54.246 [2024-10-30 10:41:15.684693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:54.246 [2024-10-30 10:41:15.684708] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:54.246 [2024-10-30 10:41:15.684723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:54.246 [2024-10-30 10:41:15.684731] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:54.246 [2024-10-30 10:41:15.684744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.246 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.504 [2024-10-30 10:41:15.730424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:54.504 BaseBdev1
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.504 [
00:12:54.504 {
00:12:54.504 "name": "BaseBdev1",
00:12:54.504 "aliases": [
00:12:54.504 "d797283e-2537-4478-a402-1ae825de0cf8"
00:12:54.504 ],
00:12:54.504 "product_name": "Malloc disk",
00:12:54.504 "block_size": 512,
00:12:54.504 "num_blocks": 65536,
00:12:54.504 "uuid": "d797283e-2537-4478-a402-1ae825de0cf8",
00:12:54.504 "assigned_rate_limits": {
00:12:54.504 "rw_ios_per_sec": 0,
00:12:54.504 "rw_mbytes_per_sec": 0,
00:12:54.504 "r_mbytes_per_sec": 0,
00:12:54.504 "w_mbytes_per_sec": 0
00:12:54.504 },
00:12:54.504 "claimed": true,
00:12:54.504 "claim_type": "exclusive_write",
00:12:54.504 "zoned": false,
00:12:54.504 "supported_io_types": {
00:12:54.504 "read": true,
00:12:54.504 "write": true,
00:12:54.504 "unmap": true,
00:12:54.504 "flush": true,
00:12:54.504 "reset": true,
00:12:54.504 "nvme_admin": false,
00:12:54.504 "nvme_io": false,
00:12:54.504 "nvme_io_md": false,
00:12:54.504 "write_zeroes": true,
00:12:54.504 "zcopy": true,
00:12:54.504 "get_zone_info": false,
00:12:54.504 "zone_management": false,
00:12:54.504 "zone_append": false,
00:12:54.504 "compare": false,
00:12:54.504 "compare_and_write": false,
00:12:54.504 "abort": true,
00:12:54.504 "seek_hole": false,
00:12:54.504 "seek_data": false,
00:12:54.504 "copy": true,
00:12:54.504 "nvme_iov_md": false
00:12:54.504 },
00:12:54.504 "memory_domains": [
00:12:54.504 {
00:12:54.504 "dma_device_id": "system",
00:12:54.504 "dma_device_type": 1
00:12:54.504 },
00:12:54.504 {
00:12:54.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:54.504 "dma_device_type": 2
00:12:54.504 }
00:12:54.504 ],
00:12:54.504 "driver_specific": {}
00:12:54.504 }
00:12:54.504 ]
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:54.504 "name": "Existed_Raid",
00:12:54.504 "uuid": "21531ba5-d15a-467a-b96b-4c6e69cafd14",
00:12:54.504 "strip_size_kb": 64,
00:12:54.504 "state": "configuring",
00:12:54.504 "raid_level": "raid0",
00:12:54.504 "superblock": true,
00:12:54.504 "num_base_bdevs": 3,
00:12:54.504 "num_base_bdevs_discovered": 1,
00:12:54.504 "num_base_bdevs_operational": 3,
00:12:54.504 "base_bdevs_list": [
00:12:54.504 {
00:12:54.504 "name": "BaseBdev1",
00:12:54.504 "uuid": "d797283e-2537-4478-a402-1ae825de0cf8",
00:12:54.504 "is_configured": true,
00:12:54.504 "data_offset": 2048,
00:12:54.504 "data_size": 63488
00:12:54.504 },
00:12:54.504 {
00:12:54.504 "name": "BaseBdev2",
00:12:54.504 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:54.504 "is_configured": false,
00:12:54.504 "data_offset": 0,
00:12:54.504 "data_size": 0
00:12:54.504 },
00:12:54.504 {
00:12:54.504 "name": "BaseBdev3",
00:12:54.504 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:54.504 "is_configured": false,
00:12:54.504 "data_offset": 0,
00:12:54.504 "data_size": 0
00:12:54.504 }
00:12:54.504 ]
00:12:54.504 }'
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:54.504 10:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.071 [2024-10-30 10:41:16.294700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:55.071 [2024-10-30 10:41:16.294760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.071 [2024-10-30 10:41:16.302770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:55.071 [2024-10-30 10:41:16.305232] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:55.071 [2024-10-30 10:41:16.305435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:55.071 [2024-10-30 10:41:16.305463] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:55.071 [2024-10-30 10:41:16.305479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.071 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:55.072 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.072 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:55.072 "name": "Existed_Raid",
00:12:55.072 "uuid": "00cae6ab-fe2a-4fdf-a90d-9d84e8beb71e",
00:12:55.072 "strip_size_kb": 64,
00:12:55.072 "state": "configuring",
00:12:55.072 "raid_level": "raid0",
00:12:55.072 "superblock": true,
00:12:55.072 "num_base_bdevs": 3,
00:12:55.072 "num_base_bdevs_discovered": 1,
00:12:55.072 "num_base_bdevs_operational": 3,
00:12:55.072 "base_bdevs_list": [
00:12:55.072 {
00:12:55.072 "name": "BaseBdev1",
00:12:55.072 "uuid": "d797283e-2537-4478-a402-1ae825de0cf8",
00:12:55.072 "is_configured": true,
00:12:55.072 "data_offset": 2048,
00:12:55.072 "data_size": 63488
00:12:55.072 },
00:12:55.072 {
00:12:55.072 "name": "BaseBdev2",
00:12:55.072 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.072 "is_configured": false,
00:12:55.072 "data_offset": 0,
00:12:55.072 "data_size": 0
00:12:55.072 },
00:12:55.072 {
00:12:55.072 "name": "BaseBdev3",
00:12:55.072 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.072 "is_configured": false,
00:12:55.072 "data_offset": 0,
00:12:55.072 "data_size": 0
00:12:55.072 }
00:12:55.072 ]
00:12:55.072 }'
00:12:55.072 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:55.072 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.639 [2024-10-30 10:41:16.895494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:55.639 BaseBdev2
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
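The `rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2` call traced above sizes each base bdev; the numbers that show up later in the dumps are simple arithmetic over those two arguments. A small Python sketch of that arithmetic, using only values visible in this log (the helper name is ours, not SPDK's):

```python
MIB = 1024 * 1024

def malloc_num_blocks(size_mib: int, block_size: int) -> int:
    # bdev_malloc_create 32 512 -> a 32 MiB malloc disk of 512-byte blocks
    return size_mib * MIB // block_size

num_blocks = malloc_num_blocks(32, 512)  # 65536, as reported by bdev_get_bdevs
data_offset = 2048                       # blocks the superblock (-s) reserves, per the dump
data_size = num_blocks - data_offset     # 63488, matching "data_size": 63488
blockcnt = 3 * data_size                 # 190464, matching "blockcnt 190464, blocklen 512"
print(num_blocks, data_size, blockcnt)
```

The raid0 capacity is simply the sum of the three base bdevs' data regions, which is why the configured volume reports 190464 blocks.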
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.639 [
00:12:55.639 {
00:12:55.639 "name": "BaseBdev2",
00:12:55.639 "aliases": [
00:12:55.639 "f75108a0-1b7b-4d02-8953-037d87bf555e"
00:12:55.639 ],
00:12:55.639 "product_name": "Malloc disk",
00:12:55.639 "block_size": 512,
00:12:55.639 "num_blocks": 65536,
00:12:55.639 "uuid": "f75108a0-1b7b-4d02-8953-037d87bf555e",
00:12:55.639 "assigned_rate_limits": {
00:12:55.639 "rw_ios_per_sec": 0,
00:12:55.639 "rw_mbytes_per_sec": 0,
00:12:55.639 "r_mbytes_per_sec": 0,
00:12:55.639 "w_mbytes_per_sec": 0
00:12:55.639 },
00:12:55.639 "claimed": true,
00:12:55.639 "claim_type": "exclusive_write",
00:12:55.639 "zoned": false,
00:12:55.639 "supported_io_types": {
00:12:55.639 "read": true,
00:12:55.639 "write": true,
00:12:55.639 "unmap": true,
00:12:55.639 "flush": true,
00:12:55.639 "reset": true,
00:12:55.639 "nvme_admin": false,
00:12:55.639 "nvme_io": false,
00:12:55.639 "nvme_io_md": false,
00:12:55.639 "write_zeroes": true,
00:12:55.639 "zcopy": true,
00:12:55.639 "get_zone_info": false,
00:12:55.639 "zone_management": false,
00:12:55.639 "zone_append": false,
00:12:55.639 "compare": false,
00:12:55.639 "compare_and_write": false,
00:12:55.639 "abort": true,
00:12:55.639 "seek_hole": false,
00:12:55.639 "seek_data": false,
00:12:55.639 "copy": true,
00:12:55.639 "nvme_iov_md": false
00:12:55.639 },
00:12:55.639 "memory_domains": [
00:12:55.639 {
00:12:55.639 "dma_device_id": "system",
00:12:55.639 "dma_device_type": 1
00:12:55.639 },
00:12:55.639 {
00:12:55.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:55.639 "dma_device_type": 2
00:12:55.639 }
00:12:55.639 ],
00:12:55.639 "driver_specific": {}
00:12:55.639 }
00:12:55.639 ]
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
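Each pass of the `(( i < num_base_bdevs ))` loop re-runs `verify_raid_bdev_state`, which filters the `bdev_raid_get_bdevs all` output through `jq` and compares the fields against the expected values. A hypothetical Python re-implementation of that check, fed a trimmed copy of the raid_bdev_info dumped in this log (field names are SPDK's; the snippet itself is only an illustration of the logic, not test code from the suite):

```python
import json

# Trimmed raid_bdev_info as dumped by "rpc_cmd bdev_raid_get_bdevs all"
# after two of the three base bdevs have been created.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": false}
  ]
}
""")

# The invariant the test keeps re-checking: discovered equals the count of
# configured base bdevs, and the raid stays "configuring" until every
# operational base bdev is present, at which point it flips to "online".
discovered = sum(b["is_configured"] for b in raid_bdev_info["base_bdevs_list"])
assert discovered == raid_bdev_info["num_base_bdevs_discovered"]
expected = ("online" if discovered == raid_bdev_info["num_base_bdevs_operational"]
            else "configuring")
assert raid_bdev_info["state"] == expected
```

This is why the dumps below show `"state": "configuring"` with `num_base_bdevs_discovered: 2` and only flip to `"state": "online"` once BaseBdev3 is claimed.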
00:12:55.639 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:55.640 "name": "Existed_Raid",
00:12:55.640 "uuid": "00cae6ab-fe2a-4fdf-a90d-9d84e8beb71e",
00:12:55.640 "strip_size_kb": 64,
00:12:55.640 "state": "configuring",
00:12:55.640 "raid_level": "raid0",
00:12:55.640 "superblock": true,
00:12:55.640 "num_base_bdevs": 3,
00:12:55.640 "num_base_bdevs_discovered": 2,
00:12:55.640 "num_base_bdevs_operational": 3,
00:12:55.640 "base_bdevs_list": [
00:12:55.640 {
00:12:55.640 "name": "BaseBdev1",
00:12:55.640 "uuid": "d797283e-2537-4478-a402-1ae825de0cf8",
00:12:55.640 "is_configured": true,
00:12:55.640 "data_offset": 2048,
00:12:55.640 "data_size": 63488
00:12:55.640 },
00:12:55.640 {
00:12:55.640 "name": "BaseBdev2",
00:12:55.640 "uuid": "f75108a0-1b7b-4d02-8953-037d87bf555e",
00:12:55.640 "is_configured": true,
00:12:55.640 "data_offset": 2048,
00:12:55.640 "data_size": 63488
00:12:55.640 },
00:12:55.640 {
00:12:55.640 "name": "BaseBdev3",
00:12:55.640 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.640 "is_configured": false,
00:12:55.640 "data_offset": 0,
00:12:55.640 "data_size": 0
00:12:55.640 }
00:12:55.640 ]
00:12:55.640 }'
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:55.640 10:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.208 [2024-10-30 10:41:17.533837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:56.208 [2024-10-30 10:41:17.534182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:56.208 [2024-10-30 10:41:17.534214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:12:56.208 [2024-10-30 10:41:17.534591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:56.208 BaseBdev3
00:12:56.208 [2024-10-30 10:41:17.534804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:56.208 [2024-10-30 10:41:17.534826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:12:56.208 [2024-10-30 10:41:17.535038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.208 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.208 [
00:12:56.208 {
00:12:56.208 "name": "BaseBdev3",
00:12:56.208 "aliases": [
00:12:56.208 "c0f20d9e-1464-4fae-af01-864424c9124a"
00:12:56.208 ],
00:12:56.208 "product_name": "Malloc disk",
00:12:56.208 "block_size": 512,
00:12:56.208 "num_blocks": 65536,
00:12:56.208 "uuid": "c0f20d9e-1464-4fae-af01-864424c9124a",
00:12:56.208 "assigned_rate_limits": {
00:12:56.208 "rw_ios_per_sec": 0,
00:12:56.208 "rw_mbytes_per_sec": 0,
00:12:56.208 "r_mbytes_per_sec": 0,
00:12:56.208 "w_mbytes_per_sec": 0
00:12:56.208 },
00:12:56.208 "claimed": true,
00:12:56.208 "claim_type": "exclusive_write",
00:12:56.208 "zoned": false,
00:12:56.208 "supported_io_types": {
00:12:56.209 "read": true,
00:12:56.209 "write": true,
00:12:56.209 "unmap": true,
00:12:56.209 "flush": true,
00:12:56.209 "reset": true,
00:12:56.209 "nvme_admin": false,
00:12:56.209 "nvme_io": false,
00:12:56.209 "nvme_io_md": false,
00:12:56.209 "write_zeroes": true,
00:12:56.209 "zcopy": true,
00:12:56.209 "get_zone_info": false,
00:12:56.209 "zone_management": false,
00:12:56.209 "zone_append": false,
00:12:56.209 "compare": false,
00:12:56.209 "compare_and_write": false,
00:12:56.209 "abort": true,
00:12:56.209 "seek_hole": false,
00:12:56.209 "seek_data": false,
00:12:56.209 "copy": true,
00:12:56.209 "nvme_iov_md": false
00:12:56.209 },
00:12:56.209 "memory_domains": [
00:12:56.209 {
00:12:56.209 "dma_device_id": "system",
00:12:56.209 "dma_device_type": 1
00:12:56.209 },
00:12:56.209 {
00:12:56.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:56.209 "dma_device_type": 2
00:12:56.209 }
00:12:56.209 ],
00:12:56.209 "driver_specific": {}
00:12:56.209 }
00:12:56.209 ]
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:56.209 "name": "Existed_Raid",
00:12:56.209 "uuid": "00cae6ab-fe2a-4fdf-a90d-9d84e8beb71e",
00:12:56.209 "strip_size_kb": 64,
00:12:56.209 "state": "online",
00:12:56.209 "raid_level": "raid0",
00:12:56.209 "superblock": true,
00:12:56.209 "num_base_bdevs": 3,
00:12:56.209 "num_base_bdevs_discovered": 3,
00:12:56.209 "num_base_bdevs_operational": 3,
00:12:56.209 "base_bdevs_list": [
00:12:56.209 {
00:12:56.209 "name": "BaseBdev1",
00:12:56.209 "uuid": "d797283e-2537-4478-a402-1ae825de0cf8",
00:12:56.209 "is_configured": true,
00:12:56.209 "data_offset": 2048,
00:12:56.209 "data_size": 63488
00:12:56.209 },
00:12:56.209 {
00:12:56.209 "name": "BaseBdev2",
00:12:56.209 "uuid": "f75108a0-1b7b-4d02-8953-037d87bf555e",
00:12:56.209 "is_configured": true,
00:12:56.209 "data_offset": 2048,
00:12:56.209 "data_size": 63488
00:12:56.209 },
00:12:56.209 {
00:12:56.209 "name": "BaseBdev3",
00:12:56.209 "uuid": "c0f20d9e-1464-4fae-af01-864424c9124a",
00:12:56.209 "is_configured": true,
00:12:56.209 "data_offset": 2048,
00:12:56.209 "data_size": 63488
00:12:56.209 }
00:12:56.209 ]
00:12:56.209 }'
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:56.209 10:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.795 [2024-10-30 10:41:18.102511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.795 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:56.795 "name": "Existed_Raid",
00:12:56.795 "aliases": [
00:12:56.795 "00cae6ab-fe2a-4fdf-a90d-9d84e8beb71e"
00:12:56.795 ],
00:12:56.795 "product_name": "Raid Volume",
00:12:56.795 "block_size": 512,
00:12:56.795 "num_blocks": 190464,
00:12:56.795 "uuid": "00cae6ab-fe2a-4fdf-a90d-9d84e8beb71e",
00:12:56.795 "assigned_rate_limits": {
00:12:56.795 "rw_ios_per_sec": 0,
00:12:56.795 "rw_mbytes_per_sec": 0,
00:12:56.795 "r_mbytes_per_sec": 0,
00:12:56.795 "w_mbytes_per_sec": 0
00:12:56.795 },
00:12:56.795 "claimed": false,
00:12:56.795 "zoned": false,
00:12:56.795 "supported_io_types": {
00:12:56.795 "read": true,
00:12:56.795 "write": true,
00:12:56.795 "unmap": true,
00:12:56.795 "flush": true,
00:12:56.795 "reset": true,
00:12:56.795 "nvme_admin": false,
00:12:56.795 "nvme_io": false,
00:12:56.795 "nvme_io_md": false,
00:12:56.795 "write_zeroes": true,
00:12:56.795 "zcopy": false,
00:12:56.795 "get_zone_info": false,
00:12:56.795 "zone_management": false,
00:12:56.795 "zone_append": false,
00:12:56.795 "compare": false,
00:12:56.795 "compare_and_write": false,
00:12:56.795 "abort": false,
00:12:56.795 "seek_hole": false,
00:12:56.795 "seek_data": false,
00:12:56.795 "copy": false,
00:12:56.795 "nvme_iov_md": false
00:12:56.795 },
00:12:56.795 "memory_domains": [
00:12:56.795 {
00:12:56.795 "dma_device_id": "system",
00:12:56.795 "dma_device_type": 1
00:12:56.795 },
00:12:56.795 {
00:12:56.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:56.795 "dma_device_type": 2
00:12:56.795 },
00:12:56.795 {
00:12:56.795 "dma_device_id": "system",
00:12:56.795 "dma_device_type": 1
00:12:56.795 },
00:12:56.795 {
00:12:56.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:56.795 "dma_device_type": 2
00:12:56.795 },
00:12:56.795 {
00:12:56.795 "dma_device_id": "system",
00:12:56.795 "dma_device_type": 1
00:12:56.795 },
00:12:56.795 { 00:12:56.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.795 "dma_device_type": 2 00:12:56.795 } 00:12:56.795 ], 00:12:56.795 "driver_specific": { 00:12:56.795 "raid": { 00:12:56.795 "uuid": "00cae6ab-fe2a-4fdf-a90d-9d84e8beb71e", 00:12:56.795 "strip_size_kb": 64, 00:12:56.795 "state": "online", 00:12:56.795 "raid_level": "raid0", 00:12:56.795 "superblock": true, 00:12:56.795 "num_base_bdevs": 3, 00:12:56.795 "num_base_bdevs_discovered": 3, 00:12:56.795 "num_base_bdevs_operational": 3, 00:12:56.795 "base_bdevs_list": [ 00:12:56.795 { 00:12:56.795 "name": "BaseBdev1", 00:12:56.795 "uuid": "d797283e-2537-4478-a402-1ae825de0cf8", 00:12:56.795 "is_configured": true, 00:12:56.795 "data_offset": 2048, 00:12:56.795 "data_size": 63488 00:12:56.795 }, 00:12:56.795 { 00:12:56.795 "name": "BaseBdev2", 00:12:56.795 "uuid": "f75108a0-1b7b-4d02-8953-037d87bf555e", 00:12:56.795 "is_configured": true, 00:12:56.795 "data_offset": 2048, 00:12:56.795 "data_size": 63488 00:12:56.795 }, 00:12:56.795 { 00:12:56.795 "name": "BaseBdev3", 00:12:56.795 "uuid": "c0f20d9e-1464-4fae-af01-864424c9124a", 00:12:56.795 "is_configured": true, 00:12:56.795 "data_offset": 2048, 00:12:56.796 "data_size": 63488 00:12:56.796 } 00:12:56.796 ] 00:12:56.796 } 00:12:56.796 } 00:12:56.796 }' 00:12:56.796 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.796 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:56.796 BaseBdev2 00:12:56.796 BaseBdev3' 00:12:56.796 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.152 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.152 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:12:57.152 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:57.152 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.153 10:41:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 [2024-10-30 10:41:18.414249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.153 [2024-10-30 10:41:18.414485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.153 [2024-10-30 10:41:18.414577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:57.153 10:41:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.153 10:41:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.153 "name": "Existed_Raid", 00:12:57.153 "uuid": "00cae6ab-fe2a-4fdf-a90d-9d84e8beb71e", 00:12:57.153 "strip_size_kb": 64, 00:12:57.153 "state": "offline", 00:12:57.153 "raid_level": "raid0", 00:12:57.153 "superblock": true, 00:12:57.153 "num_base_bdevs": 3, 00:12:57.153 "num_base_bdevs_discovered": 2, 00:12:57.153 "num_base_bdevs_operational": 2, 00:12:57.153 "base_bdevs_list": [ 00:12:57.153 { 00:12:57.153 "name": null, 00:12:57.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.153 "is_configured": false, 00:12:57.153 "data_offset": 0, 00:12:57.153 "data_size": 63488 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "name": "BaseBdev2", 00:12:57.153 "uuid": "f75108a0-1b7b-4d02-8953-037d87bf555e", 00:12:57.153 "is_configured": true, 00:12:57.153 "data_offset": 2048, 00:12:57.153 "data_size": 63488 00:12:57.153 }, 00:12:57.153 { 00:12:57.153 "name": "BaseBdev3", 00:12:57.153 "uuid": "c0f20d9e-1464-4fae-af01-864424c9124a", 00:12:57.153 "is_configured": true, 00:12:57.153 "data_offset": 2048, 00:12:57.153 "data_size": 63488 00:12:57.153 } 00:12:57.153 ] 00:12:57.153 }' 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.153 10:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.721 10:41:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.721 [2024-10-30 10:41:19.065651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.721 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.980 [2024-10-30 10:41:19.208563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:57.980 [2024-10-30 10:41:19.208619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.980 BaseBdev2 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:57.980 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.981 [ 00:12:57.981 { 00:12:57.981 "name": "BaseBdev2", 00:12:57.981 "aliases": [ 00:12:57.981 "eaab090a-6878-4c70-8a17-b9e0b3a37c98" 00:12:57.981 ], 00:12:57.981 "product_name": "Malloc disk", 00:12:57.981 "block_size": 512, 00:12:57.981 "num_blocks": 65536, 00:12:57.981 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:12:57.981 "assigned_rate_limits": { 00:12:57.981 "rw_ios_per_sec": 0, 00:12:57.981 "rw_mbytes_per_sec": 0, 00:12:57.981 "r_mbytes_per_sec": 0, 00:12:57.981 "w_mbytes_per_sec": 0 00:12:57.981 }, 00:12:57.981 "claimed": false, 00:12:57.981 "zoned": false, 00:12:57.981 "supported_io_types": { 00:12:57.981 "read": true, 00:12:57.981 "write": true, 00:12:57.981 "unmap": true, 00:12:57.981 "flush": true, 00:12:57.981 "reset": true, 00:12:57.981 "nvme_admin": false, 00:12:57.981 "nvme_io": false, 00:12:57.981 "nvme_io_md": false, 00:12:57.981 "write_zeroes": true, 00:12:57.981 "zcopy": true, 00:12:57.981 "get_zone_info": false, 00:12:57.981 "zone_management": false, 00:12:57.981 "zone_append": false, 00:12:57.981 "compare": false, 00:12:57.981 "compare_and_write": false, 00:12:57.981 "abort": true, 00:12:57.981 "seek_hole": false, 00:12:57.981 "seek_data": false, 00:12:57.981 "copy": true, 00:12:57.981 "nvme_iov_md": false 00:12:57.981 }, 00:12:57.981 "memory_domains": [ 00:12:57.981 { 00:12:57.981 "dma_device_id": "system", 00:12:57.981 "dma_device_type": 1 00:12:57.981 }, 00:12:57.981 { 00:12:57.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.981 "dma_device_type": 2 00:12:57.981 } 00:12:57.981 ], 00:12:57.981 "driver_specific": {} 00:12:57.981 } 00:12:57.981 ] 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:57.981 10:41:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.981 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.239 BaseBdev3 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:58.239 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.239 10:41:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.239 [ 00:12:58.239 { 00:12:58.239 "name": "BaseBdev3", 00:12:58.239 "aliases": [ 00:12:58.239 "98e295cb-d02c-492e-b2d4-8cdfdfa222b4" 00:12:58.239 ], 00:12:58.239 "product_name": "Malloc disk", 00:12:58.239 "block_size": 512, 00:12:58.239 "num_blocks": 65536, 00:12:58.239 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:12:58.239 "assigned_rate_limits": { 00:12:58.239 "rw_ios_per_sec": 0, 00:12:58.239 "rw_mbytes_per_sec": 0, 00:12:58.239 "r_mbytes_per_sec": 0, 00:12:58.239 "w_mbytes_per_sec": 0 00:12:58.239 }, 00:12:58.239 "claimed": false, 00:12:58.239 "zoned": false, 00:12:58.239 "supported_io_types": { 00:12:58.239 "read": true, 00:12:58.239 "write": true, 00:12:58.239 "unmap": true, 00:12:58.239 "flush": true, 00:12:58.239 "reset": true, 00:12:58.239 "nvme_admin": false, 00:12:58.240 "nvme_io": false, 00:12:58.240 "nvme_io_md": false, 00:12:58.240 "write_zeroes": true, 00:12:58.240 "zcopy": true, 00:12:58.240 "get_zone_info": false, 00:12:58.240 "zone_management": false, 00:12:58.240 "zone_append": false, 00:12:58.240 "compare": false, 00:12:58.240 "compare_and_write": false, 00:12:58.240 "abort": true, 00:12:58.240 "seek_hole": false, 00:12:58.240 "seek_data": false, 00:12:58.240 "copy": true, 00:12:58.240 "nvme_iov_md": false 00:12:58.240 }, 00:12:58.240 "memory_domains": [ 00:12:58.240 { 00:12:58.240 "dma_device_id": "system", 00:12:58.240 "dma_device_type": 1 00:12:58.240 }, 00:12:58.240 { 00:12:58.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.240 "dma_device_type": 2 00:12:58.240 } 00:12:58.240 ], 00:12:58.240 "driver_specific": {} 00:12:58.240 } 00:12:58.240 ] 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.240 [2024-10-30 10:41:19.494779] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.240 [2024-10-30 10:41:19.494969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.240 [2024-10-30 10:41:19.495128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.240 [2024-10-30 10:41:19.497586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.240 "name": "Existed_Raid", 00:12:58.240 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:12:58.240 "strip_size_kb": 64, 00:12:58.240 "state": "configuring", 00:12:58.240 "raid_level": "raid0", 00:12:58.240 "superblock": true, 00:12:58.240 "num_base_bdevs": 3, 00:12:58.240 "num_base_bdevs_discovered": 2, 00:12:58.240 "num_base_bdevs_operational": 3, 00:12:58.240 "base_bdevs_list": [ 00:12:58.240 { 00:12:58.240 "name": "BaseBdev1", 00:12:58.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.240 "is_configured": false, 00:12:58.240 "data_offset": 0, 00:12:58.240 "data_size": 0 00:12:58.240 }, 00:12:58.240 { 00:12:58.240 "name": "BaseBdev2", 00:12:58.240 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:12:58.240 "is_configured": true, 00:12:58.240 "data_offset": 2048, 00:12:58.240 "data_size": 63488 00:12:58.240 }, 00:12:58.240 { 00:12:58.240 "name": "BaseBdev3", 00:12:58.240 "uuid": 
"98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:12:58.240 "is_configured": true, 00:12:58.240 "data_offset": 2048, 00:12:58.240 "data_size": 63488 00:12:58.240 } 00:12:58.240 ] 00:12:58.240 }' 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.240 10:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.807 [2024-10-30 10:41:20.022910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.807 "name": "Existed_Raid", 00:12:58.807 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:12:58.807 "strip_size_kb": 64, 00:12:58.807 "state": "configuring", 00:12:58.807 "raid_level": "raid0", 00:12:58.807 "superblock": true, 00:12:58.807 "num_base_bdevs": 3, 00:12:58.807 "num_base_bdevs_discovered": 1, 00:12:58.807 "num_base_bdevs_operational": 3, 00:12:58.807 "base_bdevs_list": [ 00:12:58.807 { 00:12:58.807 "name": "BaseBdev1", 00:12:58.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.807 "is_configured": false, 00:12:58.807 "data_offset": 0, 00:12:58.807 "data_size": 0 00:12:58.807 }, 00:12:58.807 { 00:12:58.807 "name": null, 00:12:58.807 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:12:58.807 "is_configured": false, 00:12:58.807 "data_offset": 0, 00:12:58.807 "data_size": 63488 00:12:58.807 }, 00:12:58.807 { 00:12:58.807 "name": "BaseBdev3", 00:12:58.807 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:12:58.807 "is_configured": true, 00:12:58.807 "data_offset": 2048, 00:12:58.807 "data_size": 63488 00:12:58.807 } 00:12:58.807 ] 00:12:58.807 }' 00:12:58.807 10:41:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.807 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.067 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:59.067 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.067 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.067 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.326 [2024-10-30 10:41:20.622264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.326 BaseBdev1 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.326 10:41:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.326 [ 00:12:59.326 { 00:12:59.326 "name": "BaseBdev1", 00:12:59.326 "aliases": [ 00:12:59.326 "49a76d73-4555-4d0d-ae09-28016033d8ac" 00:12:59.326 ], 00:12:59.326 "product_name": "Malloc disk", 00:12:59.326 "block_size": 512, 00:12:59.326 "num_blocks": 65536, 00:12:59.326 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:12:59.326 "assigned_rate_limits": { 00:12:59.326 "rw_ios_per_sec": 0, 00:12:59.326 "rw_mbytes_per_sec": 0, 00:12:59.326 "r_mbytes_per_sec": 0, 00:12:59.326 "w_mbytes_per_sec": 0 00:12:59.326 }, 00:12:59.326 "claimed": true, 00:12:59.326 "claim_type": "exclusive_write", 00:12:59.326 "zoned": false, 00:12:59.326 "supported_io_types": { 00:12:59.326 "read": true, 00:12:59.326 "write": true, 00:12:59.326 "unmap": true, 00:12:59.326 "flush": true, 00:12:59.326 "reset": true, 00:12:59.326 "nvme_admin": false, 00:12:59.326 "nvme_io": false, 00:12:59.326 "nvme_io_md": false, 00:12:59.326 "write_zeroes": true, 00:12:59.326 "zcopy": true, 
00:12:59.326 "get_zone_info": false, 00:12:59.326 "zone_management": false, 00:12:59.326 "zone_append": false, 00:12:59.326 "compare": false, 00:12:59.326 "compare_and_write": false, 00:12:59.326 "abort": true, 00:12:59.326 "seek_hole": false, 00:12:59.326 "seek_data": false, 00:12:59.326 "copy": true, 00:12:59.326 "nvme_iov_md": false 00:12:59.326 }, 00:12:59.326 "memory_domains": [ 00:12:59.326 { 00:12:59.326 "dma_device_id": "system", 00:12:59.326 "dma_device_type": 1 00:12:59.326 }, 00:12:59.326 { 00:12:59.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.326 "dma_device_type": 2 00:12:59.326 } 00:12:59.326 ], 00:12:59.326 "driver_specific": {} 00:12:59.326 } 00:12:59.326 ] 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.326 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.326 "name": "Existed_Raid", 00:12:59.326 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:12:59.326 "strip_size_kb": 64, 00:12:59.326 "state": "configuring", 00:12:59.326 "raid_level": "raid0", 00:12:59.326 "superblock": true, 00:12:59.326 "num_base_bdevs": 3, 00:12:59.326 "num_base_bdevs_discovered": 2, 00:12:59.326 "num_base_bdevs_operational": 3, 00:12:59.326 "base_bdevs_list": [ 00:12:59.326 { 00:12:59.326 "name": "BaseBdev1", 00:12:59.326 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:12:59.326 "is_configured": true, 00:12:59.326 "data_offset": 2048, 00:12:59.326 "data_size": 63488 00:12:59.326 }, 00:12:59.326 { 00:12:59.326 "name": null, 00:12:59.326 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:12:59.326 "is_configured": false, 00:12:59.326 "data_offset": 0, 00:12:59.326 "data_size": 63488 00:12:59.326 }, 00:12:59.326 { 00:12:59.326 "name": "BaseBdev3", 00:12:59.326 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:12:59.326 "is_configured": true, 00:12:59.326 "data_offset": 2048, 00:12:59.326 "data_size": 63488 00:12:59.326 } 00:12:59.326 ] 00:12:59.326 }' 00:12:59.327 10:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:59.327 10:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.894 [2024-10-30 10:41:21.214490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.894 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.895 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.895 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.895 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.895 "name": "Existed_Raid", 00:12:59.895 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:12:59.895 "strip_size_kb": 64, 00:12:59.895 "state": "configuring", 00:12:59.895 "raid_level": "raid0", 00:12:59.895 "superblock": true, 00:12:59.895 "num_base_bdevs": 3, 00:12:59.895 "num_base_bdevs_discovered": 1, 00:12:59.895 "num_base_bdevs_operational": 3, 00:12:59.895 "base_bdevs_list": [ 00:12:59.895 { 00:12:59.895 "name": "BaseBdev1", 00:12:59.895 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:12:59.895 "is_configured": true, 00:12:59.895 "data_offset": 2048, 00:12:59.895 "data_size": 63488 00:12:59.895 }, 00:12:59.895 { 00:12:59.895 "name": null, 00:12:59.895 "uuid": 
"eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:12:59.895 "is_configured": false, 00:12:59.895 "data_offset": 0, 00:12:59.895 "data_size": 63488 00:12:59.895 }, 00:12:59.895 { 00:12:59.895 "name": null, 00:12:59.895 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:12:59.895 "is_configured": false, 00:12:59.895 "data_offset": 0, 00:12:59.895 "data_size": 63488 00:12:59.895 } 00:12:59.895 ] 00:12:59.895 }' 00:12:59.895 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.895 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.463 [2024-10-30 10:41:21.798760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.463 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.464 "name": "Existed_Raid", 00:13:00.464 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:13:00.464 "strip_size_kb": 64, 00:13:00.464 "state": "configuring", 00:13:00.464 "raid_level": "raid0", 
00:13:00.464 "superblock": true, 00:13:00.464 "num_base_bdevs": 3, 00:13:00.464 "num_base_bdevs_discovered": 2, 00:13:00.464 "num_base_bdevs_operational": 3, 00:13:00.464 "base_bdevs_list": [ 00:13:00.464 { 00:13:00.464 "name": "BaseBdev1", 00:13:00.464 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:13:00.464 "is_configured": true, 00:13:00.464 "data_offset": 2048, 00:13:00.464 "data_size": 63488 00:13:00.464 }, 00:13:00.464 { 00:13:00.464 "name": null, 00:13:00.464 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:13:00.464 "is_configured": false, 00:13:00.464 "data_offset": 0, 00:13:00.464 "data_size": 63488 00:13:00.464 }, 00:13:00.464 { 00:13:00.464 "name": "BaseBdev3", 00:13:00.464 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:13:00.464 "is_configured": true, 00:13:00.464 "data_offset": 2048, 00:13:00.464 "data_size": 63488 00:13:00.464 } 00:13:00.464 ] 00:13:00.464 }' 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.464 10:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.032 [2024-10-30 10:41:22.362979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:13:01.032 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.291 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.291 "name": "Existed_Raid", 00:13:01.291 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:13:01.291 "strip_size_kb": 64, 00:13:01.291 "state": "configuring", 00:13:01.291 "raid_level": "raid0", 00:13:01.291 "superblock": true, 00:13:01.291 "num_base_bdevs": 3, 00:13:01.291 "num_base_bdevs_discovered": 1, 00:13:01.291 "num_base_bdevs_operational": 3, 00:13:01.291 "base_bdevs_list": [ 00:13:01.291 { 00:13:01.291 "name": null, 00:13:01.291 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:13:01.291 "is_configured": false, 00:13:01.291 "data_offset": 0, 00:13:01.291 "data_size": 63488 00:13:01.291 }, 00:13:01.291 { 00:13:01.291 "name": null, 00:13:01.291 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:13:01.291 "is_configured": false, 00:13:01.291 "data_offset": 0, 00:13:01.291 "data_size": 63488 00:13:01.291 }, 00:13:01.291 { 00:13:01.291 "name": "BaseBdev3", 00:13:01.291 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:13:01.291 "is_configured": true, 00:13:01.291 "data_offset": 2048, 00:13:01.291 "data_size": 63488 00:13:01.291 } 00:13:01.291 ] 00:13:01.291 }' 00:13:01.291 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.291 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.550 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.550 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.550 10:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:01.550 10:41:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.550 10:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.550 [2024-10-30 10:41:23.012149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.550 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.551 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.551 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.551 10:41:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.809 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.810 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.810 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.810 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.810 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.810 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.810 "name": "Existed_Raid", 00:13:01.810 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:13:01.810 "strip_size_kb": 64, 00:13:01.810 "state": "configuring", 00:13:01.810 "raid_level": "raid0", 00:13:01.810 "superblock": true, 00:13:01.810 "num_base_bdevs": 3, 00:13:01.810 "num_base_bdevs_discovered": 2, 00:13:01.810 "num_base_bdevs_operational": 3, 00:13:01.810 "base_bdevs_list": [ 00:13:01.810 { 00:13:01.810 "name": null, 00:13:01.810 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:13:01.810 "is_configured": false, 00:13:01.810 "data_offset": 0, 00:13:01.810 "data_size": 63488 00:13:01.810 }, 00:13:01.810 { 00:13:01.810 "name": "BaseBdev2", 00:13:01.810 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:13:01.810 "is_configured": true, 00:13:01.810 "data_offset": 2048, 00:13:01.810 "data_size": 63488 00:13:01.810 }, 00:13:01.810 { 00:13:01.810 "name": "BaseBdev3", 00:13:01.810 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:13:01.810 "is_configured": true, 00:13:01.810 "data_offset": 2048, 00:13:01.810 "data_size": 63488 00:13:01.810 } 00:13:01.810 ] 00:13:01.810 }' 00:13:01.810 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.810 
10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.068 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.068 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:02.068 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.068 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 49a76d73-4555-4d0d-ae09-28016033d8ac 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.328 [2024-10-30 10:41:23.679316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:02.328 [2024-10-30 10:41:23.679627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000008200 00:13:02.328 [2024-10-30 10:41:23.679649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:02.328 NewBaseBdev 00:13:02.328 [2024-10-30 10:41:23.680008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:02.328 [2024-10-30 10:41:23.680317] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:02.328 [2024-10-30 10:41:23.680340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.328 [2024-10-30 10:41:23.680571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.328 10:41:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.328 [ 00:13:02.328 { 00:13:02.328 "name": "NewBaseBdev", 00:13:02.328 "aliases": [ 00:13:02.328 "49a76d73-4555-4d0d-ae09-28016033d8ac" 00:13:02.328 ], 00:13:02.328 "product_name": "Malloc disk", 00:13:02.328 "block_size": 512, 00:13:02.328 "num_blocks": 65536, 00:13:02.328 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:13:02.328 "assigned_rate_limits": { 00:13:02.328 "rw_ios_per_sec": 0, 00:13:02.328 "rw_mbytes_per_sec": 0, 00:13:02.328 "r_mbytes_per_sec": 0, 00:13:02.328 "w_mbytes_per_sec": 0 00:13:02.328 }, 00:13:02.328 "claimed": true, 00:13:02.328 "claim_type": "exclusive_write", 00:13:02.328 "zoned": false, 00:13:02.328 "supported_io_types": { 00:13:02.328 "read": true, 00:13:02.328 "write": true, 00:13:02.328 "unmap": true, 00:13:02.328 "flush": true, 00:13:02.328 "reset": true, 00:13:02.328 "nvme_admin": false, 00:13:02.328 "nvme_io": false, 00:13:02.328 "nvme_io_md": false, 00:13:02.328 "write_zeroes": true, 00:13:02.328 "zcopy": true, 00:13:02.328 "get_zone_info": false, 00:13:02.328 "zone_management": false, 00:13:02.328 "zone_append": false, 00:13:02.328 "compare": false, 00:13:02.328 "compare_and_write": false, 00:13:02.328 "abort": true, 00:13:02.328 "seek_hole": false, 00:13:02.328 "seek_data": false, 00:13:02.328 "copy": true, 00:13:02.328 "nvme_iov_md": false 00:13:02.328 }, 00:13:02.328 "memory_domains": [ 00:13:02.328 { 00:13:02.328 "dma_device_id": "system", 00:13:02.328 "dma_device_type": 1 00:13:02.328 }, 00:13:02.328 { 00:13:02.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.328 "dma_device_type": 2 00:13:02.328 } 00:13:02.328 ], 00:13:02.328 "driver_specific": {} 00:13:02.328 } 00:13:02.328 
] 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.328 
10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.328 "name": "Existed_Raid", 00:13:02.328 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:13:02.328 "strip_size_kb": 64, 00:13:02.328 "state": "online", 00:13:02.328 "raid_level": "raid0", 00:13:02.328 "superblock": true, 00:13:02.328 "num_base_bdevs": 3, 00:13:02.328 "num_base_bdevs_discovered": 3, 00:13:02.328 "num_base_bdevs_operational": 3, 00:13:02.328 "base_bdevs_list": [ 00:13:02.328 { 00:13:02.328 "name": "NewBaseBdev", 00:13:02.328 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:13:02.328 "is_configured": true, 00:13:02.328 "data_offset": 2048, 00:13:02.328 "data_size": 63488 00:13:02.328 }, 00:13:02.328 { 00:13:02.328 "name": "BaseBdev2", 00:13:02.328 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:13:02.328 "is_configured": true, 00:13:02.328 "data_offset": 2048, 00:13:02.328 "data_size": 63488 00:13:02.328 }, 00:13:02.328 { 00:13:02.328 "name": "BaseBdev3", 00:13:02.328 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:13:02.328 "is_configured": true, 00:13:02.328 "data_offset": 2048, 00:13:02.328 "data_size": 63488 00:13:02.328 } 00:13:02.328 ] 00:13:02.328 }' 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.328 10:41:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.896 [2024-10-30 10:41:24.211903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.896 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:02.896 "name": "Existed_Raid", 00:13:02.896 "aliases": [ 00:13:02.896 "96bfc469-b41e-4334-b474-2c552c152a79" 00:13:02.896 ], 00:13:02.896 "product_name": "Raid Volume", 00:13:02.896 "block_size": 512, 00:13:02.896 "num_blocks": 190464, 00:13:02.896 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:13:02.896 "assigned_rate_limits": { 00:13:02.896 "rw_ios_per_sec": 0, 00:13:02.896 "rw_mbytes_per_sec": 0, 00:13:02.896 "r_mbytes_per_sec": 0, 00:13:02.896 "w_mbytes_per_sec": 0 00:13:02.896 }, 00:13:02.896 "claimed": false, 00:13:02.896 "zoned": false, 00:13:02.896 "supported_io_types": { 00:13:02.896 "read": true, 00:13:02.896 "write": true, 00:13:02.896 "unmap": true, 00:13:02.896 "flush": true, 00:13:02.896 "reset": true, 00:13:02.896 "nvme_admin": false, 00:13:02.896 "nvme_io": false, 00:13:02.896 "nvme_io_md": false, 00:13:02.896 "write_zeroes": true, 00:13:02.896 "zcopy": false, 00:13:02.896 "get_zone_info": false, 00:13:02.896 "zone_management": false, 00:13:02.896 "zone_append": false, 00:13:02.896 "compare": false, 00:13:02.896 "compare_and_write": false, 
00:13:02.896 "abort": false, 00:13:02.896 "seek_hole": false, 00:13:02.896 "seek_data": false, 00:13:02.896 "copy": false, 00:13:02.896 "nvme_iov_md": false 00:13:02.896 }, 00:13:02.896 "memory_domains": [ 00:13:02.896 { 00:13:02.896 "dma_device_id": "system", 00:13:02.896 "dma_device_type": 1 00:13:02.896 }, 00:13:02.896 { 00:13:02.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.896 "dma_device_type": 2 00:13:02.896 }, 00:13:02.896 { 00:13:02.896 "dma_device_id": "system", 00:13:02.897 "dma_device_type": 1 00:13:02.897 }, 00:13:02.897 { 00:13:02.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.897 "dma_device_type": 2 00:13:02.897 }, 00:13:02.897 { 00:13:02.897 "dma_device_id": "system", 00:13:02.897 "dma_device_type": 1 00:13:02.897 }, 00:13:02.897 { 00:13:02.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.897 "dma_device_type": 2 00:13:02.897 } 00:13:02.897 ], 00:13:02.897 "driver_specific": { 00:13:02.897 "raid": { 00:13:02.897 "uuid": "96bfc469-b41e-4334-b474-2c552c152a79", 00:13:02.897 "strip_size_kb": 64, 00:13:02.897 "state": "online", 00:13:02.897 "raid_level": "raid0", 00:13:02.897 "superblock": true, 00:13:02.897 "num_base_bdevs": 3, 00:13:02.897 "num_base_bdevs_discovered": 3, 00:13:02.897 "num_base_bdevs_operational": 3, 00:13:02.897 "base_bdevs_list": [ 00:13:02.897 { 00:13:02.897 "name": "NewBaseBdev", 00:13:02.897 "uuid": "49a76d73-4555-4d0d-ae09-28016033d8ac", 00:13:02.897 "is_configured": true, 00:13:02.897 "data_offset": 2048, 00:13:02.897 "data_size": 63488 00:13:02.897 }, 00:13:02.897 { 00:13:02.897 "name": "BaseBdev2", 00:13:02.897 "uuid": "eaab090a-6878-4c70-8a17-b9e0b3a37c98", 00:13:02.897 "is_configured": true, 00:13:02.897 "data_offset": 2048, 00:13:02.897 "data_size": 63488 00:13:02.897 }, 00:13:02.897 { 00:13:02.897 "name": "BaseBdev3", 00:13:02.897 "uuid": "98e295cb-d02c-492e-b2d4-8cdfdfa222b4", 00:13:02.897 "is_configured": true, 00:13:02.897 "data_offset": 2048, 00:13:02.897 "data_size": 63488 00:13:02.897 } 
00:13:02.897 ] 00:13:02.897 } 00:13:02.897 } 00:13:02.897 }' 00:13:02.897 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:02.897 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:02.897 BaseBdev2 00:13:02.897 BaseBdev3' 00:13:02.897 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.157 [2024-10-30 10:41:24.535648] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.157 [2024-10-30 10:41:24.535695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.157 [2024-10-30 10:41:24.535780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.157 [2024-10-30 10:41:24.535850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.157 [2024-10-30 10:41:24.535869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64619 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64619 ']' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64619 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64619 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:03.157 killing process with pid 64619 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64619' 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64619 00:13:03.157 [2024-10-30 
10:41:24.575838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.157 10:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64619 00:13:03.416 [2024-10-30 10:41:24.851984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:04.790 10:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:04.790 00:13:04.790 real 0m11.837s 00:13:04.790 user 0m19.722s 00:13:04.790 sys 0m1.542s 00:13:04.790 10:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:04.790 10:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.790 ************************************ 00:13:04.790 END TEST raid_state_function_test_sb 00:13:04.790 ************************************ 00:13:04.790 10:41:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:13:04.790 10:41:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:04.790 10:41:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:04.790 10:41:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:04.790 ************************************ 00:13:04.790 START TEST raid_superblock_test 00:13:04.790 ************************************ 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt=() 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65256 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65256 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65256 ']' 00:13:04.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:04.790 10:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.790 [2024-10-30 10:41:26.055507] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:13:04.790 [2024-10-30 10:41:26.055679] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65256 ] 00:13:04.790 [2024-10-30 10:41:26.227621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.049 [2024-10-30 10:41:26.359356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.308 [2024-10-30 10:41:26.564385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.308 [2024-10-30 10:41:26.564456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_malloc=malloc1 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.876 malloc1 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.876 [2024-10-30 10:41:27.100239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:05.876 [2024-10-30 10:41:27.100317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.876 [2024-10-30 10:41:27.100351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:05.876 [2024-10-30 10:41:27.100366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.876 [2024-10-30 10:41:27.103230] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.876 [2024-10-30 10:41:27.103277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:05.876 pt1 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.876 malloc2 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.876 [2024-10-30 10:41:27.152721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:05.876 [2024-10-30 10:41:27.152804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.876 [2024-10-30 10:41:27.152834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:05.876 [2024-10-30 10:41:27.152848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.876 [2024-10-30 10:41:27.155727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.876 [2024-10-30 10:41:27.155771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:05.876 pt2 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.876 malloc3 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.876 [2024-10-30 10:41:27.218749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:05.876 [2024-10-30 10:41:27.218814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.876 [2024-10-30 10:41:27.218845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:05.876 [2024-10-30 10:41:27.218860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.876 [2024-10-30 10:41:27.221730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.876 [2024-10-30 10:41:27.221773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:05.876 pt3 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.876 [2024-10-30 10:41:27.230803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:05.876 [2024-10-30 10:41:27.233356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:05.876 [2024-10-30 10:41:27.233451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:05.876 [2024-10-30 10:41:27.233663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:05.876 [2024-10-30 10:41:27.233685] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:05.876 [2024-10-30 10:41:27.234029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:05.876 [2024-10-30 10:41:27.234264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:05.876 [2024-10-30 10:41:27.234281] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:05.876 [2024-10-30 10:41:27.234474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:05.876 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.877 10:41:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.877 "name": "raid_bdev1", 00:13:05.877 "uuid": "29252eac-fb14-4b76-9cb3-a4efadc6fc2b", 00:13:05.877 "strip_size_kb": 64, 00:13:05.877 "state": "online", 00:13:05.877 "raid_level": "raid0", 00:13:05.877 "superblock": true, 00:13:05.877 "num_base_bdevs": 3, 00:13:05.877 "num_base_bdevs_discovered": 3, 00:13:05.877 "num_base_bdevs_operational": 3, 00:13:05.877 "base_bdevs_list": [ 00:13:05.877 { 00:13:05.877 "name": "pt1", 00:13:05.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.877 "is_configured": true, 00:13:05.877 "data_offset": 2048, 00:13:05.877 "data_size": 63488 00:13:05.877 }, 00:13:05.877 { 00:13:05.877 "name": "pt2", 00:13:05.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.877 "is_configured": true, 00:13:05.877 "data_offset": 2048, 00:13:05.877 "data_size": 63488 00:13:05.877 }, 00:13:05.877 { 00:13:05.877 "name": "pt3", 00:13:05.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.877 
"is_configured": true, 00:13:05.877 "data_offset": 2048, 00:13:05.877 "data_size": 63488 00:13:05.877 } 00:13:05.877 ] 00:13:05.877 }' 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.877 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.444 [2024-10-30 10:41:27.735338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.444 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:06.445 "name": "raid_bdev1", 00:13:06.445 "aliases": [ 00:13:06.445 "29252eac-fb14-4b76-9cb3-a4efadc6fc2b" 00:13:06.445 ], 00:13:06.445 "product_name": "Raid Volume", 00:13:06.445 "block_size": 512, 00:13:06.445 "num_blocks": 190464, 00:13:06.445 "uuid": 
"29252eac-fb14-4b76-9cb3-a4efadc6fc2b", 00:13:06.445 "assigned_rate_limits": { 00:13:06.445 "rw_ios_per_sec": 0, 00:13:06.445 "rw_mbytes_per_sec": 0, 00:13:06.445 "r_mbytes_per_sec": 0, 00:13:06.445 "w_mbytes_per_sec": 0 00:13:06.445 }, 00:13:06.445 "claimed": false, 00:13:06.445 "zoned": false, 00:13:06.445 "supported_io_types": { 00:13:06.445 "read": true, 00:13:06.445 "write": true, 00:13:06.445 "unmap": true, 00:13:06.445 "flush": true, 00:13:06.445 "reset": true, 00:13:06.445 "nvme_admin": false, 00:13:06.445 "nvme_io": false, 00:13:06.445 "nvme_io_md": false, 00:13:06.445 "write_zeroes": true, 00:13:06.445 "zcopy": false, 00:13:06.445 "get_zone_info": false, 00:13:06.445 "zone_management": false, 00:13:06.445 "zone_append": false, 00:13:06.445 "compare": false, 00:13:06.445 "compare_and_write": false, 00:13:06.445 "abort": false, 00:13:06.445 "seek_hole": false, 00:13:06.445 "seek_data": false, 00:13:06.445 "copy": false, 00:13:06.445 "nvme_iov_md": false 00:13:06.445 }, 00:13:06.445 "memory_domains": [ 00:13:06.445 { 00:13:06.445 "dma_device_id": "system", 00:13:06.445 "dma_device_type": 1 00:13:06.445 }, 00:13:06.445 { 00:13:06.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.445 "dma_device_type": 2 00:13:06.445 }, 00:13:06.445 { 00:13:06.445 "dma_device_id": "system", 00:13:06.445 "dma_device_type": 1 00:13:06.445 }, 00:13:06.445 { 00:13:06.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.445 "dma_device_type": 2 00:13:06.445 }, 00:13:06.445 { 00:13:06.445 "dma_device_id": "system", 00:13:06.445 "dma_device_type": 1 00:13:06.445 }, 00:13:06.445 { 00:13:06.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.445 "dma_device_type": 2 00:13:06.445 } 00:13:06.445 ], 00:13:06.445 "driver_specific": { 00:13:06.445 "raid": { 00:13:06.445 "uuid": "29252eac-fb14-4b76-9cb3-a4efadc6fc2b", 00:13:06.445 "strip_size_kb": 64, 00:13:06.445 "state": "online", 00:13:06.445 "raid_level": "raid0", 00:13:06.445 "superblock": true, 00:13:06.445 "num_base_bdevs": 
3, 00:13:06.445 "num_base_bdevs_discovered": 3, 00:13:06.445 "num_base_bdevs_operational": 3, 00:13:06.445 "base_bdevs_list": [ 00:13:06.445 { 00:13:06.445 "name": "pt1", 00:13:06.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.445 "is_configured": true, 00:13:06.445 "data_offset": 2048, 00:13:06.445 "data_size": 63488 00:13:06.445 }, 00:13:06.445 { 00:13:06.445 "name": "pt2", 00:13:06.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.445 "is_configured": true, 00:13:06.445 "data_offset": 2048, 00:13:06.445 "data_size": 63488 00:13:06.445 }, 00:13:06.445 { 00:13:06.445 "name": "pt3", 00:13:06.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.445 "is_configured": true, 00:13:06.445 "data_offset": 2048, 00:13:06.445 "data_size": 63488 00:13:06.445 } 00:13:06.445 ] 00:13:06.445 } 00:13:06.445 } 00:13:06.445 }' 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:06.445 pt2 00:13:06.445 pt3' 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:13:06.445 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.704 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.705 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.705 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:06.705 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:06.705 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.705 10:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.705 10:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.705 [2024-10-30 10:41:28.051361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=29252eac-fb14-4b76-9cb3-a4efadc6fc2b 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 29252eac-fb14-4b76-9cb3-a4efadc6fc2b ']' 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.705 [2024-10-30 10:41:28.095025] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.705 [2024-10-30 10:41:28.095069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.705 [2024-10-30 10:41:28.095164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.705 [2024-10-30 10:41:28.095242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.705 [2024-10-30 10:41:28.095257] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.705 10:41:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.705 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:06.964 
10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.964 [2024-10-30 10:41:28.239134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:06.964 [2024-10-30 10:41:28.241609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:06.964 [2024-10-30 10:41:28.241701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:06.964 [2024-10-30 10:41:28.241789] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:06.964 [2024-10-30 10:41:28.241862] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:06.964 [2024-10-30 10:41:28.241893] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:06.964 [2024-10-30 10:41:28.241919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.964 [2024-10-30 10:41:28.241934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:06.964 request: 00:13:06.964 { 00:13:06.964 "name": "raid_bdev1", 00:13:06.964 "raid_level": "raid0", 00:13:06.964 "base_bdevs": [ 00:13:06.964 "malloc1", 00:13:06.964 "malloc2", 00:13:06.964 "malloc3" 00:13:06.964 ], 00:13:06.964 "strip_size_kb": 64, 00:13:06.964 "superblock": false, 00:13:06.964 "method": "bdev_raid_create", 00:13:06.964 "req_id": 1 
00:13:06.964 } 00:13:06.964 Got JSON-RPC error response 00:13:06.964 response: 00:13:06.964 { 00:13:06.964 "code": -17, 00:13:06.964 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:06.964 } 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.964 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 [2024-10-30 10:41:28.307085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:06.965 [2024-10-30 10:41:28.307143] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.965 [2024-10-30 10:41:28.307171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:06.965 [2024-10-30 10:41:28.307186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.965 [2024-10-30 10:41:28.310017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.965 [2024-10-30 10:41:28.310055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:06.965 [2024-10-30 10:41:28.310153] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:06.965 [2024-10-30 10:41:28.310222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:06.965 pt1 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.965 "name": "raid_bdev1", 00:13:06.965 "uuid": "29252eac-fb14-4b76-9cb3-a4efadc6fc2b", 00:13:06.965 "strip_size_kb": 64, 00:13:06.965 "state": "configuring", 00:13:06.965 "raid_level": "raid0", 00:13:06.965 "superblock": true, 00:13:06.965 "num_base_bdevs": 3, 00:13:06.965 "num_base_bdevs_discovered": 1, 00:13:06.965 "num_base_bdevs_operational": 3, 00:13:06.965 "base_bdevs_list": [ 00:13:06.965 { 00:13:06.965 "name": "pt1", 00:13:06.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.965 "is_configured": true, 00:13:06.965 "data_offset": 2048, 00:13:06.965 "data_size": 63488 00:13:06.965 }, 00:13:06.965 { 00:13:06.965 "name": null, 00:13:06.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.965 "is_configured": false, 00:13:06.965 "data_offset": 2048, 00:13:06.965 "data_size": 63488 00:13:06.965 }, 00:13:06.965 { 00:13:06.965 "name": null, 00:13:06.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.965 "is_configured": false, 00:13:06.965 "data_offset": 2048, 00:13:06.965 "data_size": 63488 00:13:06.965 } 00:13:06.965 ] 00:13:06.965 }' 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.965 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.531 10:41:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.531 [2024-10-30 10:41:28.819280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:07.531 [2024-10-30 10:41:28.819350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.531 [2024-10-30 10:41:28.819383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:07.531 [2024-10-30 10:41:28.819404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.531 [2024-10-30 10:41:28.819958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.531 [2024-10-30 10:41:28.820004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:07.531 [2024-10-30 10:41:28.820113] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:07.531 [2024-10-30 10:41:28.820145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:07.531 pt2 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.531 [2024-10-30 10:41:28.827270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:07.531 10:41:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.531 "name": "raid_bdev1", 00:13:07.531 "uuid": "29252eac-fb14-4b76-9cb3-a4efadc6fc2b", 00:13:07.531 "strip_size_kb": 64, 00:13:07.531 "state": 
"configuring", 00:13:07.531 "raid_level": "raid0", 00:13:07.531 "superblock": true, 00:13:07.531 "num_base_bdevs": 3, 00:13:07.531 "num_base_bdevs_discovered": 1, 00:13:07.531 "num_base_bdevs_operational": 3, 00:13:07.531 "base_bdevs_list": [ 00:13:07.531 { 00:13:07.531 "name": "pt1", 00:13:07.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.531 "is_configured": true, 00:13:07.531 "data_offset": 2048, 00:13:07.531 "data_size": 63488 00:13:07.531 }, 00:13:07.531 { 00:13:07.531 "name": null, 00:13:07.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.531 "is_configured": false, 00:13:07.531 "data_offset": 0, 00:13:07.531 "data_size": 63488 00:13:07.531 }, 00:13:07.531 { 00:13:07.531 "name": null, 00:13:07.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.531 "is_configured": false, 00:13:07.531 "data_offset": 2048, 00:13:07.531 "data_size": 63488 00:13:07.531 } 00:13:07.531 ] 00:13:07.531 }' 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.531 10:41:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.096 [2024-10-30 10:41:29.364347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:08.096 [2024-10-30 10:41:29.364465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.096 [2024-10-30 10:41:29.364491] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:08.096 [2024-10-30 10:41:29.364508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.096 [2024-10-30 10:41:29.365111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.096 [2024-10-30 10:41:29.365142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:08.096 [2024-10-30 10:41:29.365239] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:08.096 [2024-10-30 10:41:29.365275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:08.096 pt2 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.096 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.096 [2024-10-30 10:41:29.376321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:08.096 [2024-10-30 10:41:29.376375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.096 [2024-10-30 10:41:29.376424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:08.096 [2024-10-30 10:41:29.376439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.096 [2024-10-30 10:41:29.376882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.096 [2024-10-30 
10:41:29.376914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:08.096 [2024-10-30 10:41:29.377002] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:08.097 [2024-10-30 10:41:29.377052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:08.097 [2024-10-30 10:41:29.377198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:08.097 [2024-10-30 10:41:29.377218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:08.097 [2024-10-30 10:41:29.377533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:08.097 [2024-10-30 10:41:29.377725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:08.097 [2024-10-30 10:41:29.377749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:08.097 [2024-10-30 10:41:29.377924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.097 pt3 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.097 "name": "raid_bdev1", 00:13:08.097 "uuid": "29252eac-fb14-4b76-9cb3-a4efadc6fc2b", 00:13:08.097 "strip_size_kb": 64, 00:13:08.097 "state": "online", 00:13:08.097 "raid_level": "raid0", 00:13:08.097 "superblock": true, 00:13:08.097 "num_base_bdevs": 3, 00:13:08.097 "num_base_bdevs_discovered": 3, 00:13:08.097 "num_base_bdevs_operational": 3, 00:13:08.097 "base_bdevs_list": [ 00:13:08.097 { 00:13:08.097 "name": "pt1", 00:13:08.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.097 "is_configured": true, 00:13:08.097 "data_offset": 2048, 00:13:08.097 "data_size": 63488 00:13:08.097 }, 00:13:08.097 { 00:13:08.097 "name": "pt2", 00:13:08.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.097 "is_configured": true, 00:13:08.097 "data_offset": 2048, 00:13:08.097 
"data_size": 63488 00:13:08.097 }, 00:13:08.097 { 00:13:08.097 "name": "pt3", 00:13:08.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.097 "is_configured": true, 00:13:08.097 "data_offset": 2048, 00:13:08.097 "data_size": 63488 00:13:08.097 } 00:13:08.097 ] 00:13:08.097 }' 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.097 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.661 [2024-10-30 10:41:29.916885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.661 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.661 "name": "raid_bdev1", 00:13:08.661 "aliases": [ 00:13:08.661 "29252eac-fb14-4b76-9cb3-a4efadc6fc2b" 
00:13:08.661 ], 00:13:08.661 "product_name": "Raid Volume", 00:13:08.661 "block_size": 512, 00:13:08.661 "num_blocks": 190464, 00:13:08.661 "uuid": "29252eac-fb14-4b76-9cb3-a4efadc6fc2b", 00:13:08.661 "assigned_rate_limits": { 00:13:08.661 "rw_ios_per_sec": 0, 00:13:08.661 "rw_mbytes_per_sec": 0, 00:13:08.661 "r_mbytes_per_sec": 0, 00:13:08.661 "w_mbytes_per_sec": 0 00:13:08.661 }, 00:13:08.661 "claimed": false, 00:13:08.661 "zoned": false, 00:13:08.661 "supported_io_types": { 00:13:08.661 "read": true, 00:13:08.662 "write": true, 00:13:08.662 "unmap": true, 00:13:08.662 "flush": true, 00:13:08.662 "reset": true, 00:13:08.662 "nvme_admin": false, 00:13:08.662 "nvme_io": false, 00:13:08.662 "nvme_io_md": false, 00:13:08.662 "write_zeroes": true, 00:13:08.662 "zcopy": false, 00:13:08.662 "get_zone_info": false, 00:13:08.662 "zone_management": false, 00:13:08.662 "zone_append": false, 00:13:08.662 "compare": false, 00:13:08.662 "compare_and_write": false, 00:13:08.662 "abort": false, 00:13:08.662 "seek_hole": false, 00:13:08.662 "seek_data": false, 00:13:08.662 "copy": false, 00:13:08.662 "nvme_iov_md": false 00:13:08.662 }, 00:13:08.662 "memory_domains": [ 00:13:08.662 { 00:13:08.662 "dma_device_id": "system", 00:13:08.662 "dma_device_type": 1 00:13:08.662 }, 00:13:08.662 { 00:13:08.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.662 "dma_device_type": 2 00:13:08.662 }, 00:13:08.662 { 00:13:08.662 "dma_device_id": "system", 00:13:08.662 "dma_device_type": 1 00:13:08.662 }, 00:13:08.662 { 00:13:08.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.662 "dma_device_type": 2 00:13:08.662 }, 00:13:08.662 { 00:13:08.662 "dma_device_id": "system", 00:13:08.662 "dma_device_type": 1 00:13:08.662 }, 00:13:08.662 { 00:13:08.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.662 "dma_device_type": 2 00:13:08.662 } 00:13:08.662 ], 00:13:08.662 "driver_specific": { 00:13:08.662 "raid": { 00:13:08.662 "uuid": "29252eac-fb14-4b76-9cb3-a4efadc6fc2b", 00:13:08.662 
"strip_size_kb": 64, 00:13:08.662 "state": "online", 00:13:08.662 "raid_level": "raid0", 00:13:08.662 "superblock": true, 00:13:08.662 "num_base_bdevs": 3, 00:13:08.662 "num_base_bdevs_discovered": 3, 00:13:08.662 "num_base_bdevs_operational": 3, 00:13:08.662 "base_bdevs_list": [ 00:13:08.662 { 00:13:08.662 "name": "pt1", 00:13:08.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.662 "is_configured": true, 00:13:08.662 "data_offset": 2048, 00:13:08.662 "data_size": 63488 00:13:08.662 }, 00:13:08.662 { 00:13:08.662 "name": "pt2", 00:13:08.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.662 "is_configured": true, 00:13:08.662 "data_offset": 2048, 00:13:08.662 "data_size": 63488 00:13:08.662 }, 00:13:08.662 { 00:13:08.662 "name": "pt3", 00:13:08.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.662 "is_configured": true, 00:13:08.662 "data_offset": 2048, 00:13:08.662 "data_size": 63488 00:13:08.662 } 00:13:08.662 ] 00:13:08.662 } 00:13:08.662 } 00:13:08.662 }' 00:13:08.662 10:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:08.662 pt2 00:13:08.662 pt3' 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:08.662 10:41:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.662 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.919 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.920 [2024-10-30 10:41:30.248935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 29252eac-fb14-4b76-9cb3-a4efadc6fc2b '!=' 29252eac-fb14-4b76-9cb3-a4efadc6fc2b ']' 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65256 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65256 ']' 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65256 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:08.920 10:41:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65256 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:08.920 killing process with pid 65256 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65256' 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65256 00:13:08.920 [2024-10-30 10:41:30.327150] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.920 10:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 65256 00:13:08.920 [2024-10-30 10:41:30.327272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.920 [2024-10-30 10:41:30.327349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.920 [2024-10-30 10:41:30.327368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:09.177 [2024-10-30 10:41:30.603758] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.554 10:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:10.554 00:13:10.554 real 0m5.651s 00:13:10.554 user 0m8.589s 00:13:10.554 sys 0m0.784s 00:13:10.554 10:41:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.554 ************************************ 00:13:10.554 END TEST raid_superblock_test 00:13:10.554 ************************************ 00:13:10.554 10:41:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.554 10:41:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 
00:13:10.554 10:41:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:10.554 10:41:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:10.554 10:41:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.554 ************************************ 00:13:10.554 START TEST raid_read_error_test 00:13:10.554 ************************************ 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kWk5dJoXqp 00:13:10.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65514 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65514 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65514 ']' 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:10.554 10:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.554 [2024-10-30 10:41:31.785050] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:13:10.554 [2024-10-30 10:41:31.785251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65514 ] 00:13:10.554 [2024-10-30 10:41:31.968539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.813 [2024-10-30 10:41:32.093726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.072 [2024-10-30 10:41:32.294943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.072 [2024-10-30 10:41:32.295030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.330 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:11.330 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:11.330 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.330 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.330 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.330 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.588 BaseBdev1_malloc 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.588 true 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.588 [2024-10-30 10:41:32.839471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:11.588 [2024-10-30 10:41:32.839541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.588 [2024-10-30 10:41:32.839570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:11.588 [2024-10-30 10:41:32.839588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.588 [2024-10-30 10:41:32.842385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.588 [2024-10-30 10:41:32.842449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.588 BaseBdev1 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.588 BaseBdev2_malloc 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.588 true 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.588 [2024-10-30 10:41:32.904048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:11.588 [2024-10-30 10:41:32.904115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.588 [2024-10-30 10:41:32.904139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:11.588 [2024-10-30 10:41:32.904156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.588 [2024-10-30 10:41:32.906865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.588 [2024-10-30 10:41:32.906931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:11.588 BaseBdev2 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.588 BaseBdev3_malloc 00:13:11.588 10:41:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.588 true 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.588 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.589 [2024-10-30 10:41:32.981823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:11.589 [2024-10-30 10:41:32.981889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.589 [2024-10-30 10:41:32.981917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:11.589 [2024-10-30 10:41:32.981935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.589 [2024-10-30 10:41:32.984739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.589 [2024-10-30 10:41:32.984807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:11.589 BaseBdev3 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.589 [2024-10-30 10:41:32.989902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.589 [2024-10-30 10:41:32.992294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.589 [2024-10-30 10:41:32.992439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.589 [2024-10-30 10:41:32.992708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:11.589 [2024-10-30 10:41:32.992739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:11.589 [2024-10-30 10:41:32.993072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:11.589 [2024-10-30 10:41:32.993287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:11.589 [2024-10-30 10:41:32.993320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:11.589 [2024-10-30 10:41:32.993500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.589 10:41:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.589 10:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.589 10:41:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.589 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.589 "name": "raid_bdev1", 00:13:11.589 "uuid": "210400d4-078f-42ae-8566-db496eef9be9", 00:13:11.589 "strip_size_kb": 64, 00:13:11.589 "state": "online", 00:13:11.589 "raid_level": "raid0", 00:13:11.589 "superblock": true, 00:13:11.589 "num_base_bdevs": 3, 00:13:11.589 "num_base_bdevs_discovered": 3, 00:13:11.589 "num_base_bdevs_operational": 3, 00:13:11.589 "base_bdevs_list": [ 00:13:11.589 { 00:13:11.589 "name": "BaseBdev1", 00:13:11.589 "uuid": "71c822c7-0e14-5e17-854e-3b14ca92afbb", 00:13:11.589 "is_configured": true, 00:13:11.589 "data_offset": 2048, 00:13:11.589 "data_size": 63488 00:13:11.589 }, 00:13:11.589 { 00:13:11.589 "name": "BaseBdev2", 00:13:11.589 "uuid": "7886f604-bffe-5f6b-9893-5139fb55ccba", 00:13:11.589 "is_configured": true, 00:13:11.589 "data_offset": 2048, 00:13:11.589 "data_size": 63488 
00:13:11.589 }, 00:13:11.589 { 00:13:11.589 "name": "BaseBdev3", 00:13:11.589 "uuid": "fa59ddba-5979-5577-9bd4-bd21e4a0d2d1", 00:13:11.589 "is_configured": true, 00:13:11.589 "data_offset": 2048, 00:13:11.589 "data_size": 63488 00:13:11.589 } 00:13:11.589 ] 00:13:11.589 }' 00:13:11.589 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.589 10:41:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.153 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:12.153 10:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:12.410 [2024-10-30 10:41:33.631552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.342 "name": "raid_bdev1", 00:13:13.342 "uuid": "210400d4-078f-42ae-8566-db496eef9be9", 00:13:13.342 "strip_size_kb": 64, 00:13:13.342 "state": "online", 00:13:13.342 "raid_level": "raid0", 00:13:13.342 "superblock": true, 00:13:13.342 "num_base_bdevs": 3, 00:13:13.342 "num_base_bdevs_discovered": 3, 00:13:13.342 "num_base_bdevs_operational": 3, 00:13:13.342 "base_bdevs_list": [ 00:13:13.342 { 00:13:13.342 "name": "BaseBdev1", 00:13:13.342 "uuid": "71c822c7-0e14-5e17-854e-3b14ca92afbb", 00:13:13.342 "is_configured": true, 00:13:13.342 "data_offset": 2048, 00:13:13.342 "data_size": 63488 
00:13:13.342 }, 00:13:13.342 { 00:13:13.342 "name": "BaseBdev2", 00:13:13.342 "uuid": "7886f604-bffe-5f6b-9893-5139fb55ccba", 00:13:13.342 "is_configured": true, 00:13:13.342 "data_offset": 2048, 00:13:13.342 "data_size": 63488 00:13:13.342 }, 00:13:13.342 { 00:13:13.342 "name": "BaseBdev3", 00:13:13.342 "uuid": "fa59ddba-5979-5577-9bd4-bd21e4a0d2d1", 00:13:13.342 "is_configured": true, 00:13:13.342 "data_offset": 2048, 00:13:13.342 "data_size": 63488 00:13:13.342 } 00:13:13.342 ] 00:13:13.342 }' 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.342 10:41:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.600 [2024-10-30 10:41:35.046299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.600 [2024-10-30 10:41:35.046338] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.600 [2024-10-30 10:41:35.049919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.600 [2024-10-30 10:41:35.049993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.600 [2024-10-30 10:41:35.050048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.600 [2024-10-30 10:41:35.050063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # 
killprocess 65514 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65514 ']' 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65514 00:13:13.600 { 00:13:13.600 "results": [ 00:13:13.600 { 00:13:13.600 "job": "raid_bdev1", 00:13:13.600 "core_mask": "0x1", 00:13:13.600 "workload": "randrw", 00:13:13.600 "percentage": 50, 00:13:13.600 "status": "finished", 00:13:13.600 "queue_depth": 1, 00:13:13.600 "io_size": 131072, 00:13:13.600 "runtime": 1.412198, 00:13:13.600 "iops": 11266.125571626642, 00:13:13.600 "mibps": 1408.2656964533303, 00:13:13.600 "io_failed": 1, 00:13:13.600 "io_timeout": 0, 00:13:13.600 "avg_latency_us": 123.92494706349525, 00:13:13.600 "min_latency_us": 38.167272727272724, 00:13:13.600 "max_latency_us": 1876.7127272727273 00:13:13.600 } 00:13:13.600 ], 00:13:13.600 "core_count": 1 00:13:13.600 } 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.600 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65514 00:13:13.857 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.857 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.857 killing process with pid 65514 00:13:13.857 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65514' 00:13:13.857 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65514 00:13:13.857 [2024-10-30 10:41:35.083280] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.857 10:41:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65514 00:13:13.857 [2024-10-30 
10:41:35.285146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kWk5dJoXqp 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:15.229 00:13:15.229 real 0m4.688s 00:13:15.229 user 0m5.852s 00:13:15.229 sys 0m0.569s 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:15.229 10:41:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.229 ************************************ 00:13:15.229 END TEST raid_read_error_test 00:13:15.229 ************************************ 00:13:15.229 10:41:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:13:15.229 10:41:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:15.229 10:41:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.229 10:41:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.229 ************************************ 00:13:15.229 START TEST raid_write_error_test 00:13:15.229 ************************************ 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:13:15.229 10:41:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.229 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:15.230 10:41:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.X4UgyVEi6z 00:13:15.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65660 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65660 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65660 ']' 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.230 10:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.230 [2024-10-30 10:41:36.511979] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:13:15.230 [2024-10-30 10:41:36.512143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65660 ] 00:13:15.230 [2024-10-30 10:41:36.686089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.488 [2024-10-30 10:41:36.816090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.746 [2024-10-30 10:41:37.021821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.746 [2024-10-30 10:41:37.021865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.312 BaseBdev1_malloc 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.312 true 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.312 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.313 [2024-10-30 10:41:37.616809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:16.313 [2024-10-30 10:41:37.616878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.313 [2024-10-30 10:41:37.616907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:16.313 [2024-10-30 10:41:37.616925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.313 [2024-10-30 10:41:37.619754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.313 [2024-10-30 10:41:37.619940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.313 BaseBdev1 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.313 BaseBdev2_malloc 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.313 true 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.313 [2024-10-30 10:41:37.677912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:16.313 [2024-10-30 10:41:37.678000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.313 [2024-10-30 10:41:37.678026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:16.313 [2024-10-30 10:41:37.678043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.313 [2024-10-30 10:41:37.680918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.313 [2024-10-30 10:41:37.680967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:16.313 BaseBdev2 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.313 10:41:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.313 BaseBdev3_malloc 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.313 true 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.313 [2024-10-30 10:41:37.751627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:16.313 [2024-10-30 10:41:37.751827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.313 [2024-10-30 10:41:37.751865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:16.313 [2024-10-30 10:41:37.751883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.313 [2024-10-30 10:41:37.754679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.313 [2024-10-30 10:41:37.754725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:16.313 BaseBdev3 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.313 [2024-10-30 10:41:37.759709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.313 [2024-10-30 10:41:37.762112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.313 [2024-10-30 10:41:37.762226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.313 [2024-10-30 10:41:37.762488] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:16.313 [2024-10-30 10:41:37.762508] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:16.313 [2024-10-30 10:41:37.762819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:16.313 [2024-10-30 10:41:37.763054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:16.313 [2024-10-30 10:41:37.763078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:16.313 [2024-10-30 10:41:37.763267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.313 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.571 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.571 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.571 "name": "raid_bdev1", 00:13:16.571 "uuid": "aa85e2f6-8c37-4001-ae69-449c4537cbc3", 00:13:16.571 "strip_size_kb": 64, 00:13:16.571 "state": "online", 00:13:16.571 "raid_level": "raid0", 00:13:16.571 "superblock": true, 00:13:16.571 "num_base_bdevs": 3, 00:13:16.571 "num_base_bdevs_discovered": 3, 00:13:16.571 "num_base_bdevs_operational": 3, 00:13:16.571 "base_bdevs_list": [ 00:13:16.571 { 00:13:16.571 "name": "BaseBdev1", 
00:13:16.571 "uuid": "ff59bd77-cf7c-5545-aafb-91405c3b2b89", 00:13:16.571 "is_configured": true, 00:13:16.571 "data_offset": 2048, 00:13:16.571 "data_size": 63488 00:13:16.571 }, 00:13:16.571 { 00:13:16.571 "name": "BaseBdev2", 00:13:16.571 "uuid": "13e6a298-f484-5679-be2f-f167944b1ea3", 00:13:16.571 "is_configured": true, 00:13:16.571 "data_offset": 2048, 00:13:16.571 "data_size": 63488 00:13:16.571 }, 00:13:16.571 { 00:13:16.571 "name": "BaseBdev3", 00:13:16.571 "uuid": "9d6281b8-6250-58c2-9fa1-3635231db8e1", 00:13:16.571 "is_configured": true, 00:13:16.571 "data_offset": 2048, 00:13:16.571 "data_size": 63488 00:13:16.571 } 00:13:16.571 ] 00:13:16.571 }' 00:13:16.571 10:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.571 10:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.829 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:16.830 10:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:17.144 [2024-10-30 10:41:38.413237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.088 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.088 "name": "raid_bdev1", 00:13:18.088 "uuid": "aa85e2f6-8c37-4001-ae69-449c4537cbc3", 00:13:18.088 "strip_size_kb": 64, 00:13:18.088 "state": "online", 00:13:18.088 
"raid_level": "raid0", 00:13:18.088 "superblock": true, 00:13:18.088 "num_base_bdevs": 3, 00:13:18.088 "num_base_bdevs_discovered": 3, 00:13:18.088 "num_base_bdevs_operational": 3, 00:13:18.088 "base_bdevs_list": [ 00:13:18.088 { 00:13:18.088 "name": "BaseBdev1", 00:13:18.088 "uuid": "ff59bd77-cf7c-5545-aafb-91405c3b2b89", 00:13:18.088 "is_configured": true, 00:13:18.088 "data_offset": 2048, 00:13:18.088 "data_size": 63488 00:13:18.088 }, 00:13:18.088 { 00:13:18.088 "name": "BaseBdev2", 00:13:18.089 "uuid": "13e6a298-f484-5679-be2f-f167944b1ea3", 00:13:18.089 "is_configured": true, 00:13:18.089 "data_offset": 2048, 00:13:18.089 "data_size": 63488 00:13:18.089 }, 00:13:18.089 { 00:13:18.089 "name": "BaseBdev3", 00:13:18.089 "uuid": "9d6281b8-6250-58c2-9fa1-3635231db8e1", 00:13:18.089 "is_configured": true, 00:13:18.089 "data_offset": 2048, 00:13:18.089 "data_size": 63488 00:13:18.089 } 00:13:18.089 ] 00:13:18.089 }' 00:13:18.089 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.089 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.353 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:18.354 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.354 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.354 [2024-10-30 10:41:39.820582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.354 [2024-10-30 10:41:39.820621] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.617 [2024-10-30 10:41:39.823881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.617 [2024-10-30 10:41:39.823943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.617 [2024-10-30 10:41:39.824008] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.617 [2024-10-30 10:41:39.824024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:18.617 { 00:13:18.617 "results": [ 00:13:18.617 { 00:13:18.617 "job": "raid_bdev1", 00:13:18.617 "core_mask": "0x1", 00:13:18.617 "workload": "randrw", 00:13:18.617 "percentage": 50, 00:13:18.617 "status": "finished", 00:13:18.617 "queue_depth": 1, 00:13:18.617 "io_size": 131072, 00:13:18.617 "runtime": 1.404944, 00:13:18.617 "iops": 11085.851108656288, 00:13:18.617 "mibps": 1385.731388582036, 00:13:18.617 "io_failed": 1, 00:13:18.617 "io_timeout": 0, 00:13:18.617 "avg_latency_us": 125.33696409394406, 00:13:18.617 "min_latency_us": 40.02909090909091, 00:13:18.617 "max_latency_us": 1839.4763636363637 00:13:18.617 } 00:13:18.617 ], 00:13:18.617 "core_count": 1 00:13:18.617 } 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65660 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65660 ']' 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65660 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65660 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.617 killing process with pid 65660 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.617 10:41:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65660' 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65660 00:13:18.617 [2024-10-30 10:41:39.860344] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.617 10:41:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65660 00:13:18.617 [2024-10-30 10:41:40.066652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.X4UgyVEi6z 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:19.989 00:13:19.989 real 0m4.731s 00:13:19.989 user 0m5.922s 00:13:19.989 sys 0m0.586s 00:13:19.989 ************************************ 00:13:19.989 END TEST raid_write_error_test 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.989 10:41:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.989 ************************************ 00:13:19.989 10:41:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:19.989 10:41:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:13:19.989 10:41:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:19.989 10:41:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.989 10:41:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.989 ************************************ 00:13:19.989 START TEST raid_state_function_test 00:13:19.989 ************************************ 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:19.989 10:41:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:19.989 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65804 00:13:19.990 Process raid pid: 65804 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65804' 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65804 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:19.990 10:41:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65804 ']' 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.990 10:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.990 [2024-10-30 10:41:41.312573] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:13:19.990 [2024-10-30 10:41:41.312762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.247 [2024-10-30 10:41:41.496319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.247 [2024-10-30 10:41:41.622780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.506 [2024-10-30 10:41:41.826247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.506 [2024-10-30 10:41:41.826303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 [2024-10-30 10:41:42.251399] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.077 [2024-10-30 10:41:42.251459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.077 [2024-10-30 10:41:42.251475] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.077 [2024-10-30 10:41:42.251492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.077 [2024-10-30 10:41:42.251502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.077 [2024-10-30 10:41:42.251517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.077 "name": "Existed_Raid", 00:13:21.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.077 "strip_size_kb": 64, 00:13:21.077 "state": "configuring", 00:13:21.077 "raid_level": "concat", 00:13:21.077 "superblock": false, 00:13:21.077 "num_base_bdevs": 3, 00:13:21.077 "num_base_bdevs_discovered": 0, 00:13:21.077 "num_base_bdevs_operational": 3, 00:13:21.077 "base_bdevs_list": [ 00:13:21.077 { 00:13:21.077 "name": "BaseBdev1", 00:13:21.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.077 "is_configured": false, 00:13:21.077 "data_offset": 0, 00:13:21.077 "data_size": 0 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "name": "BaseBdev2", 00:13:21.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.077 "is_configured": false, 00:13:21.077 "data_offset": 0, 00:13:21.077 "data_size": 0 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "name": "BaseBdev3", 00:13:21.077 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:21.077 "is_configured": false, 00:13:21.077 "data_offset": 0, 00:13:21.077 "data_size": 0 00:13:21.077 } 00:13:21.077 ] 00:13:21.077 }' 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.077 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.336 [2024-10-30 10:41:42.767502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.336 [2024-10-30 10:41:42.767561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.336 [2024-10-30 10:41:42.775453] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.336 [2024-10-30 10:41:42.775503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.336 [2024-10-30 10:41:42.775517] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.336 [2024-10-30 10:41:42.775532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:13:21.336 [2024-10-30 10:41:42.775542] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.336 [2024-10-30 10:41:42.775556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.336 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.594 [2024-10-30 10:41:42.821427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.594 BaseBdev1 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.594 [ 00:13:21.594 { 00:13:21.594 "name": "BaseBdev1", 00:13:21.594 "aliases": [ 00:13:21.594 "9224d101-43b3-4298-a05d-5d2be3b03cca" 00:13:21.594 ], 00:13:21.594 "product_name": "Malloc disk", 00:13:21.594 "block_size": 512, 00:13:21.594 "num_blocks": 65536, 00:13:21.594 "uuid": "9224d101-43b3-4298-a05d-5d2be3b03cca", 00:13:21.594 "assigned_rate_limits": { 00:13:21.594 "rw_ios_per_sec": 0, 00:13:21.594 "rw_mbytes_per_sec": 0, 00:13:21.594 "r_mbytes_per_sec": 0, 00:13:21.594 "w_mbytes_per_sec": 0 00:13:21.594 }, 00:13:21.594 "claimed": true, 00:13:21.594 "claim_type": "exclusive_write", 00:13:21.594 "zoned": false, 00:13:21.594 "supported_io_types": { 00:13:21.594 "read": true, 00:13:21.594 "write": true, 00:13:21.594 "unmap": true, 00:13:21.594 "flush": true, 00:13:21.594 "reset": true, 00:13:21.594 "nvme_admin": false, 00:13:21.594 "nvme_io": false, 00:13:21.594 "nvme_io_md": false, 00:13:21.594 "write_zeroes": true, 00:13:21.594 "zcopy": true, 00:13:21.594 "get_zone_info": false, 00:13:21.594 "zone_management": false, 00:13:21.594 "zone_append": false, 00:13:21.594 "compare": false, 00:13:21.594 "compare_and_write": false, 00:13:21.594 "abort": true, 00:13:21.594 "seek_hole": false, 00:13:21.594 "seek_data": false, 00:13:21.594 "copy": true, 00:13:21.594 "nvme_iov_md": false 00:13:21.594 }, 00:13:21.594 "memory_domains": [ 00:13:21.594 { 00:13:21.594 "dma_device_id": "system", 00:13:21.594 "dma_device_type": 1 00:13:21.594 }, 00:13:21.594 { 00:13:21.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:21.594 "dma_device_type": 2 00:13:21.594 } 00:13:21.594 ], 00:13:21.594 "driver_specific": {} 00:13:21.594 } 00:13:21.594 ] 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.594 10:41:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.594 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.594 "name": "Existed_Raid", 00:13:21.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.594 "strip_size_kb": 64, 00:13:21.594 "state": "configuring", 00:13:21.594 "raid_level": "concat", 00:13:21.594 "superblock": false, 00:13:21.594 "num_base_bdevs": 3, 00:13:21.594 "num_base_bdevs_discovered": 1, 00:13:21.594 "num_base_bdevs_operational": 3, 00:13:21.594 "base_bdevs_list": [ 00:13:21.594 { 00:13:21.595 "name": "BaseBdev1", 00:13:21.595 "uuid": "9224d101-43b3-4298-a05d-5d2be3b03cca", 00:13:21.595 "is_configured": true, 00:13:21.595 "data_offset": 0, 00:13:21.595 "data_size": 65536 00:13:21.595 }, 00:13:21.595 { 00:13:21.595 "name": "BaseBdev2", 00:13:21.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.595 "is_configured": false, 00:13:21.595 "data_offset": 0, 00:13:21.595 "data_size": 0 00:13:21.595 }, 00:13:21.595 { 00:13:21.595 "name": "BaseBdev3", 00:13:21.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.595 "is_configured": false, 00:13:21.595 "data_offset": 0, 00:13:21.595 "data_size": 0 00:13:21.595 } 00:13:21.595 ] 00:13:21.595 }' 00:13:21.595 10:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.595 10:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.161 [2024-10-30 10:41:43.365651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.161 [2024-10-30 10:41:43.365723] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.161 [2024-10-30 10:41:43.377699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.161 [2024-10-30 10:41:43.380096] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.161 [2024-10-30 10:41:43.380155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.161 [2024-10-30 10:41:43.380171] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.161 [2024-10-30 10:41:43.380187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.161 10:41:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.161 "name": "Existed_Raid", 00:13:22.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.161 "strip_size_kb": 64, 00:13:22.161 "state": "configuring", 00:13:22.161 "raid_level": "concat", 00:13:22.161 "superblock": false, 00:13:22.161 "num_base_bdevs": 3, 00:13:22.161 "num_base_bdevs_discovered": 1, 00:13:22.161 "num_base_bdevs_operational": 3, 00:13:22.161 "base_bdevs_list": [ 00:13:22.161 { 00:13:22.161 "name": "BaseBdev1", 00:13:22.161 "uuid": "9224d101-43b3-4298-a05d-5d2be3b03cca", 00:13:22.161 "is_configured": true, 00:13:22.161 "data_offset": 
0, 00:13:22.161 "data_size": 65536 00:13:22.161 }, 00:13:22.161 { 00:13:22.161 "name": "BaseBdev2", 00:13:22.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.161 "is_configured": false, 00:13:22.161 "data_offset": 0, 00:13:22.161 "data_size": 0 00:13:22.161 }, 00:13:22.161 { 00:13:22.161 "name": "BaseBdev3", 00:13:22.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.161 "is_configured": false, 00:13:22.161 "data_offset": 0, 00:13:22.161 "data_size": 0 00:13:22.161 } 00:13:22.161 ] 00:13:22.161 }' 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.161 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.419 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.419 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.419 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.676 [2024-10-30 10:41:43.900705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.676 BaseBdev2 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
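The trace above shows the harness creating a malloc base bdev (`bdev_malloc_create 32 512 -b BaseBdev2`) and then blocking in `waitforbdev` with `bdev_timeout=2000` until the bdev is visible. A minimal Python sketch of that polling pattern, with a stubbed `query` callable standing in for an RPC call such as `rpc.py bdev_get_bdevs` (the stub and all names here are illustrative assumptions, not SPDK's actual helpers):

```python
import time

def wait_for_bdev(query, bdev_name, timeout_s=2.0, poll_interval_s=0.05):
    """Poll `query` until a bdev named `bdev_name` appears or the timeout expires.

    `query` stands in for an RPC such as `rpc.py bdev_get_bdevs` and must
    return a list of bdev dicts. Returns True if the bdev was found in time.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(b.get("name") == bdev_name for b in query()):
            return True
        time.sleep(poll_interval_s)
    return False

# Stubbed bdev list: BaseBdev2 already exists; BaseBdev9 never appears.
fake_bdevs = [{"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}]
found = wait_for_bdev(lambda: fake_bdevs, "BaseBdev2")            # True
missing = wait_for_bdev(lambda: fake_bdevs, "BaseBdev9", timeout_s=0.2)  # False
print(found, missing)
```

The real `waitforbdev` in `common/autotest_common.sh` does the same thing with `rpc_cmd bdev_get_bdevs -b <name> -t 2000`, letting the bdev layer time the wait instead of polling client-side.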
00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.676 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.677 [ 00:13:22.677 { 00:13:22.677 "name": "BaseBdev2", 00:13:22.677 "aliases": [ 00:13:22.677 "3707f9cf-51bf-4765-9dc7-3294443dc939" 00:13:22.677 ], 00:13:22.677 "product_name": "Malloc disk", 00:13:22.677 "block_size": 512, 00:13:22.677 "num_blocks": 65536, 00:13:22.677 "uuid": "3707f9cf-51bf-4765-9dc7-3294443dc939", 00:13:22.677 "assigned_rate_limits": { 00:13:22.677 "rw_ios_per_sec": 0, 00:13:22.677 "rw_mbytes_per_sec": 0, 00:13:22.677 "r_mbytes_per_sec": 0, 00:13:22.677 "w_mbytes_per_sec": 0 00:13:22.677 }, 00:13:22.677 "claimed": true, 00:13:22.677 "claim_type": "exclusive_write", 00:13:22.677 "zoned": false, 00:13:22.677 "supported_io_types": { 00:13:22.677 "read": true, 00:13:22.677 "write": true, 00:13:22.677 "unmap": true, 00:13:22.677 "flush": true, 00:13:22.677 "reset": true, 00:13:22.677 "nvme_admin": false, 00:13:22.677 "nvme_io": false, 00:13:22.677 "nvme_io_md": false, 00:13:22.677 "write_zeroes": true, 00:13:22.677 "zcopy": true, 00:13:22.677 "get_zone_info": false, 00:13:22.677 "zone_management": false, 00:13:22.677 "zone_append": false, 00:13:22.677 "compare": false, 00:13:22.677 "compare_and_write": false, 00:13:22.677 "abort": true, 00:13:22.677 "seek_hole": 
false, 00:13:22.677 "seek_data": false, 00:13:22.677 "copy": true, 00:13:22.677 "nvme_iov_md": false 00:13:22.677 }, 00:13:22.677 "memory_domains": [ 00:13:22.677 { 00:13:22.677 "dma_device_id": "system", 00:13:22.677 "dma_device_type": 1 00:13:22.677 }, 00:13:22.677 { 00:13:22.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.677 "dma_device_type": 2 00:13:22.677 } 00:13:22.677 ], 00:13:22.677 "driver_specific": {} 00:13:22.677 } 00:13:22.677 ] 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.677 "name": "Existed_Raid", 00:13:22.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.677 "strip_size_kb": 64, 00:13:22.677 "state": "configuring", 00:13:22.677 "raid_level": "concat", 00:13:22.677 "superblock": false, 00:13:22.677 "num_base_bdevs": 3, 00:13:22.677 "num_base_bdevs_discovered": 2, 00:13:22.677 "num_base_bdevs_operational": 3, 00:13:22.677 "base_bdevs_list": [ 00:13:22.677 { 00:13:22.677 "name": "BaseBdev1", 00:13:22.677 "uuid": "9224d101-43b3-4298-a05d-5d2be3b03cca", 00:13:22.677 "is_configured": true, 00:13:22.677 "data_offset": 0, 00:13:22.677 "data_size": 65536 00:13:22.677 }, 00:13:22.677 { 00:13:22.677 "name": "BaseBdev2", 00:13:22.677 "uuid": "3707f9cf-51bf-4765-9dc7-3294443dc939", 00:13:22.677 "is_configured": true, 00:13:22.677 "data_offset": 0, 00:13:22.677 "data_size": 65536 00:13:22.677 }, 00:13:22.677 { 00:13:22.677 "name": "BaseBdev3", 00:13:22.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.677 "is_configured": false, 00:13:22.677 "data_offset": 0, 00:13:22.677 "data_size": 0 00:13:22.677 } 00:13:22.677 ] 00:13:22.677 }' 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.677 10:41:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.243 [2024-10-30 10:41:44.467296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.243 [2024-10-30 10:41:44.467355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:23.243 [2024-10-30 10:41:44.467375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:23.243 [2024-10-30 10:41:44.467711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:23.243 [2024-10-30 10:41:44.467935] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:23.243 [2024-10-30 10:41:44.467961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:23.243 [2024-10-30 10:41:44.468291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.243 BaseBdev3 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:23.243 10:41:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:23.243 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.244 [ 00:13:23.244 { 00:13:23.244 "name": "BaseBdev3", 00:13:23.244 "aliases": [ 00:13:23.244 "88d8fef1-f62a-400c-83a9-f7064825447a" 00:13:23.244 ], 00:13:23.244 "product_name": "Malloc disk", 00:13:23.244 "block_size": 512, 00:13:23.244 "num_blocks": 65536, 00:13:23.244 "uuid": "88d8fef1-f62a-400c-83a9-f7064825447a", 00:13:23.244 "assigned_rate_limits": { 00:13:23.244 "rw_ios_per_sec": 0, 00:13:23.244 "rw_mbytes_per_sec": 0, 00:13:23.244 "r_mbytes_per_sec": 0, 00:13:23.244 "w_mbytes_per_sec": 0 00:13:23.244 }, 00:13:23.244 "claimed": true, 00:13:23.244 "claim_type": "exclusive_write", 00:13:23.244 "zoned": false, 00:13:23.244 "supported_io_types": { 00:13:23.244 "read": true, 00:13:23.244 "write": true, 00:13:23.244 "unmap": true, 00:13:23.244 "flush": true, 00:13:23.244 "reset": true, 00:13:23.244 "nvme_admin": false, 00:13:23.244 "nvme_io": false, 00:13:23.244 "nvme_io_md": false, 00:13:23.244 "write_zeroes": true, 00:13:23.244 "zcopy": true, 00:13:23.244 "get_zone_info": false, 00:13:23.244 "zone_management": false, 00:13:23.244 "zone_append": false, 00:13:23.244 "compare": false, 
00:13:23.244 "compare_and_write": false, 00:13:23.244 "abort": true, 00:13:23.244 "seek_hole": false, 00:13:23.244 "seek_data": false, 00:13:23.244 "copy": true, 00:13:23.244 "nvme_iov_md": false 00:13:23.244 }, 00:13:23.244 "memory_domains": [ 00:13:23.244 { 00:13:23.244 "dma_device_id": "system", 00:13:23.244 "dma_device_type": 1 00:13:23.244 }, 00:13:23.244 { 00:13:23.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.244 "dma_device_type": 2 00:13:23.244 } 00:13:23.244 ], 00:13:23.244 "driver_specific": {} 00:13:23.244 } 00:13:23.244 ] 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.244 "name": "Existed_Raid", 00:13:23.244 "uuid": "92886852-5b89-4277-a65c-b8e9ec68593c", 00:13:23.244 "strip_size_kb": 64, 00:13:23.244 "state": "online", 00:13:23.244 "raid_level": "concat", 00:13:23.244 "superblock": false, 00:13:23.244 "num_base_bdevs": 3, 00:13:23.244 "num_base_bdevs_discovered": 3, 00:13:23.244 "num_base_bdevs_operational": 3, 00:13:23.244 "base_bdevs_list": [ 00:13:23.244 { 00:13:23.244 "name": "BaseBdev1", 00:13:23.244 "uuid": "9224d101-43b3-4298-a05d-5d2be3b03cca", 00:13:23.244 "is_configured": true, 00:13:23.244 "data_offset": 0, 00:13:23.244 "data_size": 65536 00:13:23.244 }, 00:13:23.244 { 00:13:23.244 "name": "BaseBdev2", 00:13:23.244 "uuid": "3707f9cf-51bf-4765-9dc7-3294443dc939", 00:13:23.244 "is_configured": true, 00:13:23.244 "data_offset": 0, 00:13:23.244 "data_size": 65536 00:13:23.244 }, 00:13:23.244 { 00:13:23.244 "name": "BaseBdev3", 00:13:23.244 "uuid": "88d8fef1-f62a-400c-83a9-f7064825447a", 00:13:23.244 "is_configured": true, 00:13:23.244 "data_offset": 0, 00:13:23.244 "data_size": 65536 00:13:23.244 } 00:13:23.244 ] 00:13:23.244 }' 00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
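At this point all three base bdevs are claimed and the raid has transitioned from `configuring` to `online`. The `jq -r '.[] | select(.name == "Existed_Raid")'` filter above extracts the record that `verify_raid_bdev_state` compares field by field. A rough Python equivalent of those checks, run against a trimmed copy of the JSON from the trace (the helper name and signature are illustrative, not SPDK's):

```python
import json

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size_kb,
                           num_operational):
    """Mirror the fields bdev_raid.sh's verify_raid_bdev_state compares."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count reported must match the configured base bdevs.
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

# Trimmed from the raid_bdev_info dump in the trace above.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev2", "is_configured": true, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev3", "is_configured": true, "data_offset": 0, "data_size": 65536}
  ]
}""")
print(verify_raid_bdev_state(raid_bdev_info, "online", "concat", 64, 3))  # 3
```

Earlier in the trace the same check ran with `expected_state=configuring` and `num_base_bdevs_discovered` of 0, 1, and 2 as each base bdev was added.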
00:13:23.244 10:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 [2024-10-30 10:41:45.019867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.810 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:23.810 "name": "Existed_Raid", 00:13:23.810 "aliases": [ 00:13:23.810 "92886852-5b89-4277-a65c-b8e9ec68593c" 00:13:23.810 ], 00:13:23.810 "product_name": "Raid Volume", 00:13:23.810 "block_size": 512, 00:13:23.810 "num_blocks": 196608, 00:13:23.810 "uuid": "92886852-5b89-4277-a65c-b8e9ec68593c", 00:13:23.810 "assigned_rate_limits": { 00:13:23.810 "rw_ios_per_sec": 0, 00:13:23.810 "rw_mbytes_per_sec": 0, 00:13:23.810 "r_mbytes_per_sec": 
0, 00:13:23.810 "w_mbytes_per_sec": 0 00:13:23.810 }, 00:13:23.810 "claimed": false, 00:13:23.810 "zoned": false, 00:13:23.810 "supported_io_types": { 00:13:23.810 "read": true, 00:13:23.810 "write": true, 00:13:23.810 "unmap": true, 00:13:23.810 "flush": true, 00:13:23.810 "reset": true, 00:13:23.810 "nvme_admin": false, 00:13:23.810 "nvme_io": false, 00:13:23.810 "nvme_io_md": false, 00:13:23.810 "write_zeroes": true, 00:13:23.810 "zcopy": false, 00:13:23.810 "get_zone_info": false, 00:13:23.810 "zone_management": false, 00:13:23.810 "zone_append": false, 00:13:23.810 "compare": false, 00:13:23.810 "compare_and_write": false, 00:13:23.810 "abort": false, 00:13:23.810 "seek_hole": false, 00:13:23.810 "seek_data": false, 00:13:23.810 "copy": false, 00:13:23.810 "nvme_iov_md": false 00:13:23.810 }, 00:13:23.810 "memory_domains": [ 00:13:23.810 { 00:13:23.810 "dma_device_id": "system", 00:13:23.810 "dma_device_type": 1 00:13:23.810 }, 00:13:23.810 { 00:13:23.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.810 "dma_device_type": 2 00:13:23.810 }, 00:13:23.810 { 00:13:23.810 "dma_device_id": "system", 00:13:23.810 "dma_device_type": 1 00:13:23.810 }, 00:13:23.810 { 00:13:23.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.810 "dma_device_type": 2 00:13:23.810 }, 00:13:23.810 { 00:13:23.810 "dma_device_id": "system", 00:13:23.810 "dma_device_type": 1 00:13:23.810 }, 00:13:23.810 { 00:13:23.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.810 "dma_device_type": 2 00:13:23.810 } 00:13:23.810 ], 00:13:23.810 "driver_specific": { 00:13:23.810 "raid": { 00:13:23.810 "uuid": "92886852-5b89-4277-a65c-b8e9ec68593c", 00:13:23.810 "strip_size_kb": 64, 00:13:23.810 "state": "online", 00:13:23.810 "raid_level": "concat", 00:13:23.810 "superblock": false, 00:13:23.810 "num_base_bdevs": 3, 00:13:23.810 "num_base_bdevs_discovered": 3, 00:13:23.810 "num_base_bdevs_operational": 3, 00:13:23.810 "base_bdevs_list": [ 00:13:23.810 { 00:13:23.810 "name": "BaseBdev1", 
00:13:23.810 "uuid": "9224d101-43b3-4298-a05d-5d2be3b03cca", 00:13:23.810 "is_configured": true, 00:13:23.810 "data_offset": 0, 00:13:23.810 "data_size": 65536 00:13:23.810 }, 00:13:23.810 { 00:13:23.810 "name": "BaseBdev2", 00:13:23.810 "uuid": "3707f9cf-51bf-4765-9dc7-3294443dc939", 00:13:23.810 "is_configured": true, 00:13:23.811 "data_offset": 0, 00:13:23.811 "data_size": 65536 00:13:23.811 }, 00:13:23.811 { 00:13:23.811 "name": "BaseBdev3", 00:13:23.811 "uuid": "88d8fef1-f62a-400c-83a9-f7064825447a", 00:13:23.811 "is_configured": true, 00:13:23.811 "data_offset": 0, 00:13:23.811 "data_size": 65536 00:13:23.811 } 00:13:23.811 ] 00:13:23.811 } 00:13:23.811 } 00:13:23.811 }' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:23.811 BaseBdev2 00:13:23.811 BaseBdev3' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
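The `verify_raid_bdev_properties` steps above extract the configured base bdev names from the `bdev_get_bdevs` dump with the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`. A minimal Python mirror of that filter, run against a trimmed-down stand-in for the JSON in the log:

```python
# Stand-in for the bdev_get_bdevs -b Existed_Raid output shown in the log,
# trimmed to the fields the jq filter actually touches.
raid_bdev_info = {
    "name": "Existed_Raid",
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "BaseBdev1", "is_configured": True},
                {"name": "BaseBdev2", "is_configured": True},
                {"name": "BaseBdev3", "is_configured": True},
            ]
        }
    },
}

# jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)  # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
```

The shell script then joins these names into `base_bdev_names` and loops over them, comparing `[.block_size, .md_size, .md_interleave, .dif_type]` of each base bdev against the raid volume's values (`'512   '` in this run, since only `block_size` is set).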
== 0 ]] 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.811 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.069 [2024-10-30 10:41:45.307681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.069 [2024-10-30 10:41:45.307720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.069 [2024-10-30 10:41:45.307791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- 
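After `bdev_malloc_delete BaseBdev1`, the trace shows `has_redundancy concat` hitting the fall-through `case` branch and returning 1, so `expected_state` becomes `offline`: a concat array cannot survive a missing base bdev. A sketch of that decision, under the assumption that the script's `case` statement treats mirrored/parity levels as redundant (the full statement is not visible in this log):

```python
# Assumed set of RAID levels the script's has_redundancy treats as able to
# survive the loss of one base bdev; concat and raid0 are not among them.
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state_after_base_bdev_loss(raid_level: str) -> str:
    """Mirror of the expected_state logic: redundant levels stay online
    (degraded), everything else is expected to go offline."""
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

print(expected_state_after_base_bdev_loss("concat"))  # offline
```

This is why the very next call is `verify_raid_bdev_state Existed_Raid offline concat 64 2`.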
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.069 "name": "Existed_Raid", 00:13:24.069 "uuid": "92886852-5b89-4277-a65c-b8e9ec68593c", 00:13:24.069 "strip_size_kb": 64, 00:13:24.069 "state": "offline", 00:13:24.069 "raid_level": "concat", 00:13:24.069 "superblock": false, 00:13:24.069 "num_base_bdevs": 3, 00:13:24.069 "num_base_bdevs_discovered": 2, 00:13:24.069 "num_base_bdevs_operational": 2, 00:13:24.069 "base_bdevs_list": [ 00:13:24.069 { 00:13:24.069 "name": null, 00:13:24.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.069 "is_configured": false, 00:13:24.069 "data_offset": 0, 00:13:24.069 "data_size": 65536 00:13:24.069 }, 00:13:24.069 { 00:13:24.069 "name": "BaseBdev2", 00:13:24.069 "uuid": 
"3707f9cf-51bf-4765-9dc7-3294443dc939", 00:13:24.069 "is_configured": true, 00:13:24.069 "data_offset": 0, 00:13:24.069 "data_size": 65536 00:13:24.069 }, 00:13:24.069 { 00:13:24.069 "name": "BaseBdev3", 00:13:24.069 "uuid": "88d8fef1-f62a-400c-83a9-f7064825447a", 00:13:24.069 "is_configured": true, 00:13:24.069 "data_offset": 0, 00:13:24.069 "data_size": 65536 00:13:24.069 } 00:13:24.069 ] 00:13:24.069 }' 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.069 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.636 10:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.636 [2024-10-30 10:41:45.939327] 
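The "offline" state JSON above shows what deletion does to the base bdev list: the removed bdev leaves a placeholder slot (`name: null`, all-zero uuid, `is_configured: false`) while the two surviving bdevs keep it at `num_base_bdevs_discovered: 2`. A small sketch of that counting, using a trimmed copy of the list from the log:

```python
# Trimmed base_bdevs_list from the offline Existed_Raid dump in the log:
# slot 0 is the placeholder left behind by the deleted BaseBdev1.
base_bdevs_list = [
    {"name": None, "uuid": "00000000-0000-0000-0000-000000000000", "is_configured": False},
    {"name": "BaseBdev2", "uuid": "3707f9cf-51bf-4765-9dc7-3294443dc939", "is_configured": True},
    {"name": "BaseBdev3", "uuid": "88d8fef1-f62a-400c-83a9-f7064825447a", "is_configured": True},
]

# "discovered" counts only the slots that are actually configured.
num_discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
print(num_discovered)  # 2
```

The loop that follows (`(( i < num_base_bdevs ))` with `bdev_malloc_delete BaseBdev2`, then `BaseBdev3`) drives this count to zero, at which point `raid_bdev_cleanup` tears the offline array down and `bdev_raid_get_bdevs all` returns nothing.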
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.636 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.636 [2024-10-30 10:41:46.086799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:24.636 [2024-10-30 10:41:46.086882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:24.893 10:41:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.893 BaseBdev2 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:24.893 
10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.893 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.894 [ 00:13:24.894 { 00:13:24.894 "name": "BaseBdev2", 00:13:24.894 "aliases": [ 00:13:24.894 "266560a7-d599-4ec0-a6aa-ac8fca1db896" 00:13:24.894 ], 00:13:24.894 "product_name": "Malloc disk", 00:13:24.894 "block_size": 512, 00:13:24.894 "num_blocks": 65536, 00:13:24.894 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:24.894 "assigned_rate_limits": { 00:13:24.894 "rw_ios_per_sec": 0, 00:13:24.894 "rw_mbytes_per_sec": 0, 00:13:24.894 "r_mbytes_per_sec": 0, 00:13:24.894 "w_mbytes_per_sec": 0 00:13:24.894 }, 00:13:24.894 "claimed": false, 00:13:24.894 "zoned": false, 00:13:24.894 "supported_io_types": { 00:13:24.894 "read": true, 00:13:24.894 "write": true, 00:13:24.894 "unmap": true, 00:13:24.894 "flush": true, 00:13:24.894 "reset": true, 00:13:24.894 "nvme_admin": false, 00:13:24.894 "nvme_io": false, 00:13:24.894 "nvme_io_md": false, 00:13:24.894 "write_zeroes": true, 
00:13:24.894 "zcopy": true, 00:13:24.894 "get_zone_info": false, 00:13:24.894 "zone_management": false, 00:13:24.894 "zone_append": false, 00:13:24.894 "compare": false, 00:13:24.894 "compare_and_write": false, 00:13:24.894 "abort": true, 00:13:24.894 "seek_hole": false, 00:13:24.894 "seek_data": false, 00:13:24.894 "copy": true, 00:13:24.894 "nvme_iov_md": false 00:13:24.894 }, 00:13:24.894 "memory_domains": [ 00:13:24.894 { 00:13:24.894 "dma_device_id": "system", 00:13:24.894 "dma_device_type": 1 00:13:24.894 }, 00:13:24.894 { 00:13:24.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.894 "dma_device_type": 2 00:13:24.894 } 00:13:24.894 ], 00:13:24.894 "driver_specific": {} 00:13:24.894 } 00:13:24.894 ] 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.894 BaseBdev3 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:24.894 10:41:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.894 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.894 [ 00:13:24.894 { 00:13:24.894 "name": "BaseBdev3", 00:13:24.894 "aliases": [ 00:13:24.894 "49416a1b-1e01-4efe-9d63-03264c30c6b4" 00:13:24.894 ], 00:13:24.894 "product_name": "Malloc disk", 00:13:24.894 "block_size": 512, 00:13:24.894 "num_blocks": 65536, 00:13:24.894 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:24.894 "assigned_rate_limits": { 00:13:24.894 "rw_ios_per_sec": 0, 00:13:24.894 "rw_mbytes_per_sec": 0, 00:13:25.151 "r_mbytes_per_sec": 0, 00:13:25.151 "w_mbytes_per_sec": 0 00:13:25.151 }, 00:13:25.151 "claimed": false, 00:13:25.151 "zoned": false, 00:13:25.151 "supported_io_types": { 00:13:25.151 "read": true, 00:13:25.151 "write": true, 00:13:25.151 "unmap": true, 00:13:25.151 "flush": true, 00:13:25.151 "reset": true, 00:13:25.151 "nvme_admin": false, 00:13:25.151 "nvme_io": false, 00:13:25.151 "nvme_io_md": false, 00:13:25.151 "write_zeroes": true, 
00:13:25.151 "zcopy": true, 00:13:25.151 "get_zone_info": false, 00:13:25.151 "zone_management": false, 00:13:25.151 "zone_append": false, 00:13:25.151 "compare": false, 00:13:25.151 "compare_and_write": false, 00:13:25.151 "abort": true, 00:13:25.151 "seek_hole": false, 00:13:25.151 "seek_data": false, 00:13:25.151 "copy": true, 00:13:25.151 "nvme_iov_md": false 00:13:25.151 }, 00:13:25.151 "memory_domains": [ 00:13:25.151 { 00:13:25.151 "dma_device_id": "system", 00:13:25.151 "dma_device_type": 1 00:13:25.151 }, 00:13:25.151 { 00:13:25.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.151 "dma_device_type": 2 00:13:25.151 } 00:13:25.151 ], 00:13:25.151 "driver_specific": {} 00:13:25.151 } 00:13:25.151 ] 00:13:25.151 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.152 [2024-10-30 10:41:46.375380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.152 [2024-10-30 10:41:46.375436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.152 [2024-10-30 10:41:46.375467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.152 [2024-10-30 10:41:46.377831] 
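`bdev_raid_create` above names three base bdevs but only BaseBdev2 and BaseBdev3 exist, so the array is claimed into the `configuring` state and `verify_raid_bdev_state Existed_Raid configuring concat 64 3` is called to confirm it. That helper boils down to: pick the entry named `Existed_Raid` out of the `bdev_raid_get_bdevs all` output (jq: `.[] | select(.name == "Existed_Raid")`) and compare its fields. A hypothetical Python mirror of the check, with field values taken from the JSON in the log:

```python
def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    """Sketch of bdev_raid.sh verify_raid_bdev_state (the real helper is
    shell + jq); raises if any expected field does not match."""
    info = next(b for b in bdevs if b["name"] == name)  # select(.name == ...)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    return info

# Trimmed bdev_raid_get_bdevs output: 2 of 3 base bdevs discovered, so the
# array sits in "configuring" until BaseBdev1 appears.
bdevs = [{
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3,
}]

info = verify_raid_bdev_state(bdevs, "Existed_Raid", "configuring", "concat", 64, 3)
```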
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.152 "name": "Existed_Raid", 00:13:25.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.152 "strip_size_kb": 64, 00:13:25.152 "state": "configuring", 00:13:25.152 "raid_level": "concat", 00:13:25.152 "superblock": false, 00:13:25.152 "num_base_bdevs": 3, 00:13:25.152 "num_base_bdevs_discovered": 2, 00:13:25.152 "num_base_bdevs_operational": 3, 00:13:25.152 "base_bdevs_list": [ 00:13:25.152 { 00:13:25.152 "name": "BaseBdev1", 00:13:25.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.152 "is_configured": false, 00:13:25.152 "data_offset": 0, 00:13:25.152 "data_size": 0 00:13:25.152 }, 00:13:25.152 { 00:13:25.152 "name": "BaseBdev2", 00:13:25.152 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:25.152 "is_configured": true, 00:13:25.152 "data_offset": 0, 00:13:25.152 "data_size": 65536 00:13:25.152 }, 00:13:25.152 { 00:13:25.152 "name": "BaseBdev3", 00:13:25.152 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:25.152 "is_configured": true, 00:13:25.152 "data_offset": 0, 00:13:25.152 "data_size": 65536 00:13:25.152 } 00:13:25.152 ] 00:13:25.152 }' 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.152 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.718 [2024-10-30 10:41:46.931539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.718 "name": "Existed_Raid", 00:13:25.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.718 "strip_size_kb": 64, 00:13:25.718 "state": "configuring", 00:13:25.718 "raid_level": "concat", 00:13:25.718 "superblock": false, 
00:13:25.718 "num_base_bdevs": 3, 00:13:25.718 "num_base_bdevs_discovered": 1, 00:13:25.718 "num_base_bdevs_operational": 3, 00:13:25.718 "base_bdevs_list": [ 00:13:25.718 { 00:13:25.718 "name": "BaseBdev1", 00:13:25.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.718 "is_configured": false, 00:13:25.718 "data_offset": 0, 00:13:25.718 "data_size": 0 00:13:25.718 }, 00:13:25.718 { 00:13:25.718 "name": null, 00:13:25.718 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:25.718 "is_configured": false, 00:13:25.718 "data_offset": 0, 00:13:25.718 "data_size": 65536 00:13:25.718 }, 00:13:25.718 { 00:13:25.718 "name": "BaseBdev3", 00:13:25.718 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:25.718 "is_configured": true, 00:13:25.718 "data_offset": 0, 00:13:25.718 "data_size": 65536 00:13:25.718 } 00:13:25.718 ] 00:13:25.718 }' 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.718 10:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.976 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.976 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:25.976 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.976 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.976 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.235 
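After `bdev_raid_remove_base_bdev BaseBdev2`, the dump above shows slot 1 keeping its uuid while its name is cleared and `is_configured` drops to false, with `num_base_bdevs_discovered` falling to 1; the jq probe `.[0].base_bdevs_list[1].is_configured` then checks for `false`. A minimal sketch of that slot bookkeeping (an assumption about the in-memory shape, modeled on the JSON in the log, not SPDK's actual C structures):

```python
def remove_base_bdev(raid, name):
    """Clear a named slot the way the configuring-state dump suggests:
    the slot stays, the name goes, the uuid is retained for re-add."""
    for slot in raid["base_bdevs_list"]:
        if slot["name"] == name:
            slot["name"] = None
            slot["is_configured"] = False
            raid["num_base_bdevs_discovered"] -= 1

raid = {
    "num_base_bdevs_discovered": 2,
    "base_bdevs_list": [
        # BaseBdev1 is named in the create call but not yet configured.
        {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
         "is_configured": False},
        {"name": "BaseBdev2", "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896",
         "is_configured": True},
        {"name": "BaseBdev3", "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4",
         "is_configured": True},
    ],
}

remove_base_bdev(raid, "BaseBdev2")
print(raid["base_bdevs_list"][1]["is_configured"])  # False
```

This matches the `[[ false == \f\a\l\s\e ]]` check in the trace, after which the test creates a fresh `BaseBdev1` malloc disk so the array can finish configuring.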
10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.235 [2024-10-30 10:41:47.509791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.235 BaseBdev1 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.235 [ 00:13:26.235 { 00:13:26.235 "name": "BaseBdev1", 00:13:26.235 "aliases": [ 00:13:26.235 "166f658d-1ad5-4133-91cf-dfcd5879c5c6" 00:13:26.235 ], 00:13:26.235 "product_name": 
"Malloc disk", 00:13:26.235 "block_size": 512, 00:13:26.235 "num_blocks": 65536, 00:13:26.235 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:26.235 "assigned_rate_limits": { 00:13:26.235 "rw_ios_per_sec": 0, 00:13:26.235 "rw_mbytes_per_sec": 0, 00:13:26.235 "r_mbytes_per_sec": 0, 00:13:26.235 "w_mbytes_per_sec": 0 00:13:26.235 }, 00:13:26.235 "claimed": true, 00:13:26.235 "claim_type": "exclusive_write", 00:13:26.235 "zoned": false, 00:13:26.235 "supported_io_types": { 00:13:26.235 "read": true, 00:13:26.235 "write": true, 00:13:26.235 "unmap": true, 00:13:26.235 "flush": true, 00:13:26.235 "reset": true, 00:13:26.235 "nvme_admin": false, 00:13:26.235 "nvme_io": false, 00:13:26.235 "nvme_io_md": false, 00:13:26.235 "write_zeroes": true, 00:13:26.235 "zcopy": true, 00:13:26.235 "get_zone_info": false, 00:13:26.235 "zone_management": false, 00:13:26.235 "zone_append": false, 00:13:26.235 "compare": false, 00:13:26.235 "compare_and_write": false, 00:13:26.235 "abort": true, 00:13:26.235 "seek_hole": false, 00:13:26.235 "seek_data": false, 00:13:26.235 "copy": true, 00:13:26.235 "nvme_iov_md": false 00:13:26.235 }, 00:13:26.235 "memory_domains": [ 00:13:26.235 { 00:13:26.235 "dma_device_id": "system", 00:13:26.235 "dma_device_type": 1 00:13:26.235 }, 00:13:26.235 { 00:13:26.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.235 "dma_device_type": 2 00:13:26.235 } 00:13:26.235 ], 00:13:26.235 "driver_specific": {} 00:13:26.235 } 00:13:26.235 ] 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.235 10:41:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.235 "name": "Existed_Raid", 00:13:26.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.235 "strip_size_kb": 64, 00:13:26.235 "state": "configuring", 00:13:26.235 "raid_level": "concat", 00:13:26.235 "superblock": false, 00:13:26.235 "num_base_bdevs": 3, 00:13:26.235 "num_base_bdevs_discovered": 2, 00:13:26.235 "num_base_bdevs_operational": 3, 00:13:26.235 "base_bdevs_list": [ 00:13:26.235 { 00:13:26.235 "name": "BaseBdev1", 
00:13:26.235 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:26.235 "is_configured": true, 00:13:26.235 "data_offset": 0, 00:13:26.235 "data_size": 65536 00:13:26.235 }, 00:13:26.235 { 00:13:26.235 "name": null, 00:13:26.235 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:26.235 "is_configured": false, 00:13:26.235 "data_offset": 0, 00:13:26.235 "data_size": 65536 00:13:26.235 }, 00:13:26.235 { 00:13:26.235 "name": "BaseBdev3", 00:13:26.235 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:26.235 "is_configured": true, 00:13:26.235 "data_offset": 0, 00:13:26.235 "data_size": 65536 00:13:26.235 } 00:13:26.235 ] 00:13:26.235 }' 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.235 10:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.802 [2024-10-30 10:41:48.094005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.802 
10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:26.802 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.803 "name": "Existed_Raid", 00:13:26.803 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:26.803 "strip_size_kb": 64, 00:13:26.803 "state": "configuring", 00:13:26.803 "raid_level": "concat", 00:13:26.803 "superblock": false, 00:13:26.803 "num_base_bdevs": 3, 00:13:26.803 "num_base_bdevs_discovered": 1, 00:13:26.803 "num_base_bdevs_operational": 3, 00:13:26.803 "base_bdevs_list": [ 00:13:26.803 { 00:13:26.803 "name": "BaseBdev1", 00:13:26.803 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:26.803 "is_configured": true, 00:13:26.803 "data_offset": 0, 00:13:26.803 "data_size": 65536 00:13:26.803 }, 00:13:26.803 { 00:13:26.803 "name": null, 00:13:26.803 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:26.803 "is_configured": false, 00:13:26.803 "data_offset": 0, 00:13:26.803 "data_size": 65536 00:13:26.803 }, 00:13:26.803 { 00:13:26.803 "name": null, 00:13:26.803 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:26.803 "is_configured": false, 00:13:26.803 "data_offset": 0, 00:13:26.803 "data_size": 65536 00:13:26.803 } 00:13:26.803 ] 00:13:26.803 }' 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.803 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.370 [2024-10-30 10:41:48.658173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.370 "name": "Existed_Raid", 00:13:27.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.370 "strip_size_kb": 64, 00:13:27.370 "state": "configuring", 00:13:27.370 "raid_level": "concat", 00:13:27.370 "superblock": false, 00:13:27.370 "num_base_bdevs": 3, 00:13:27.370 "num_base_bdevs_discovered": 2, 00:13:27.370 "num_base_bdevs_operational": 3, 00:13:27.370 "base_bdevs_list": [ 00:13:27.370 { 00:13:27.370 "name": "BaseBdev1", 00:13:27.370 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:27.370 "is_configured": true, 00:13:27.370 "data_offset": 0, 00:13:27.370 "data_size": 65536 00:13:27.370 }, 00:13:27.370 { 00:13:27.370 "name": null, 00:13:27.370 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:27.370 "is_configured": false, 00:13:27.370 "data_offset": 0, 00:13:27.370 "data_size": 65536 00:13:27.370 }, 00:13:27.370 { 00:13:27.370 "name": "BaseBdev3", 00:13:27.370 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:27.370 "is_configured": true, 00:13:27.370 "data_offset": 0, 00:13:27.370 "data_size": 65536 00:13:27.370 } 00:13:27.370 ] 00:13:27.370 }' 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.370 10:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.937 [2024-10-30 10:41:49.194345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.937 
10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.937 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.938 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.938 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.938 "name": "Existed_Raid", 00:13:27.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.938 "strip_size_kb": 64, 00:13:27.938 "state": "configuring", 00:13:27.938 "raid_level": "concat", 00:13:27.938 "superblock": false, 00:13:27.938 "num_base_bdevs": 3, 00:13:27.938 "num_base_bdevs_discovered": 1, 00:13:27.938 "num_base_bdevs_operational": 3, 00:13:27.938 "base_bdevs_list": [ 00:13:27.938 { 00:13:27.938 "name": null, 00:13:27.938 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:27.938 "is_configured": false, 00:13:27.938 "data_offset": 0, 00:13:27.938 "data_size": 65536 00:13:27.938 }, 00:13:27.938 { 00:13:27.938 "name": null, 00:13:27.938 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:27.938 "is_configured": false, 00:13:27.938 "data_offset": 0, 00:13:27.938 "data_size": 65536 00:13:27.938 }, 00:13:27.938 { 00:13:27.938 "name": "BaseBdev3", 00:13:27.938 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:27.938 "is_configured": true, 00:13:27.938 "data_offset": 0, 00:13:27.938 "data_size": 65536 00:13:27.938 } 00:13:27.938 ] 00:13:27.938 }' 00:13:27.938 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.938 10:41:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.504 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.505 [2024-10-30 10:41:49.835348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.505 10:41:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.505 "name": "Existed_Raid", 00:13:28.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.505 "strip_size_kb": 64, 00:13:28.505 "state": "configuring", 00:13:28.505 "raid_level": "concat", 00:13:28.505 "superblock": false, 00:13:28.505 "num_base_bdevs": 3, 00:13:28.505 "num_base_bdevs_discovered": 2, 00:13:28.505 "num_base_bdevs_operational": 3, 00:13:28.505 "base_bdevs_list": [ 00:13:28.505 { 00:13:28.505 "name": null, 00:13:28.505 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:28.505 "is_configured": false, 00:13:28.505 "data_offset": 0, 00:13:28.505 "data_size": 65536 00:13:28.505 }, 00:13:28.505 { 00:13:28.505 "name": "BaseBdev2", 00:13:28.505 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:28.505 "is_configured": true, 00:13:28.505 "data_offset": 
0, 00:13:28.505 "data_size": 65536 00:13:28.505 }, 00:13:28.505 { 00:13:28.505 "name": "BaseBdev3", 00:13:28.505 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:28.505 "is_configured": true, 00:13:28.505 "data_offset": 0, 00:13:28.505 "data_size": 65536 00:13:28.505 } 00:13:28.505 ] 00:13:28.505 }' 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.505 10:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 166f658d-1ad5-4133-91cf-dfcd5879c5c6 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 [2024-10-30 10:41:50.493449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:29.134 [2024-10-30 10:41:50.493497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:29.134 [2024-10-30 10:41:50.493512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:29.134 [2024-10-30 10:41:50.493832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:29.134 [2024-10-30 10:41:50.494052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:29.134 [2024-10-30 10:41:50.494070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:29.134 [2024-10-30 10:41:50.494358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.134 NewBaseBdev 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:29.134 
10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 [ 00:13:29.134 { 00:13:29.134 "name": "NewBaseBdev", 00:13:29.134 "aliases": [ 00:13:29.134 "166f658d-1ad5-4133-91cf-dfcd5879c5c6" 00:13:29.134 ], 00:13:29.134 "product_name": "Malloc disk", 00:13:29.134 "block_size": 512, 00:13:29.134 "num_blocks": 65536, 00:13:29.134 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:29.134 "assigned_rate_limits": { 00:13:29.134 "rw_ios_per_sec": 0, 00:13:29.134 "rw_mbytes_per_sec": 0, 00:13:29.134 "r_mbytes_per_sec": 0, 00:13:29.134 "w_mbytes_per_sec": 0 00:13:29.134 }, 00:13:29.134 "claimed": true, 00:13:29.134 "claim_type": "exclusive_write", 00:13:29.134 "zoned": false, 00:13:29.134 "supported_io_types": { 00:13:29.134 "read": true, 00:13:29.134 "write": true, 00:13:29.134 "unmap": true, 00:13:29.134 "flush": true, 00:13:29.134 "reset": true, 00:13:29.134 "nvme_admin": false, 00:13:29.134 "nvme_io": false, 00:13:29.134 "nvme_io_md": false, 00:13:29.134 "write_zeroes": true, 00:13:29.134 "zcopy": true, 00:13:29.134 "get_zone_info": false, 00:13:29.134 "zone_management": false, 00:13:29.134 "zone_append": false, 00:13:29.134 "compare": false, 00:13:29.134 "compare_and_write": false, 00:13:29.134 "abort": true, 00:13:29.134 "seek_hole": false, 00:13:29.134 "seek_data": false, 00:13:29.134 "copy": true, 00:13:29.134 "nvme_iov_md": false 00:13:29.134 }, 00:13:29.134 
"memory_domains": [ 00:13:29.134 { 00:13:29.134 "dma_device_id": "system", 00:13:29.134 "dma_device_type": 1 00:13:29.134 }, 00:13:29.134 { 00:13:29.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.134 "dma_device_type": 2 00:13:29.134 } 00:13:29.134 ], 00:13:29.134 "driver_specific": {} 00:13:29.134 } 00:13:29.134 ] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.134 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.134 "name": "Existed_Raid", 00:13:29.134 "uuid": "bb22d630-be75-4f89-8283-d19b9f54d509", 00:13:29.134 "strip_size_kb": 64, 00:13:29.134 "state": "online", 00:13:29.134 "raid_level": "concat", 00:13:29.134 "superblock": false, 00:13:29.134 "num_base_bdevs": 3, 00:13:29.134 "num_base_bdevs_discovered": 3, 00:13:29.134 "num_base_bdevs_operational": 3, 00:13:29.134 "base_bdevs_list": [ 00:13:29.134 { 00:13:29.134 "name": "NewBaseBdev", 00:13:29.134 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:29.134 "is_configured": true, 00:13:29.134 "data_offset": 0, 00:13:29.135 "data_size": 65536 00:13:29.135 }, 00:13:29.135 { 00:13:29.135 "name": "BaseBdev2", 00:13:29.135 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:29.135 "is_configured": true, 00:13:29.135 "data_offset": 0, 00:13:29.135 "data_size": 65536 00:13:29.135 }, 00:13:29.135 { 00:13:29.135 "name": "BaseBdev3", 00:13:29.135 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:29.135 "is_configured": true, 00:13:29.135 "data_offset": 0, 00:13:29.135 "data_size": 65536 00:13:29.135 } 00:13:29.135 ] 00:13:29.135 }' 00:13:29.135 10:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.135 10:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.702 [2024-10-30 10:41:51.070091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:29.702 "name": "Existed_Raid", 00:13:29.702 "aliases": [ 00:13:29.702 "bb22d630-be75-4f89-8283-d19b9f54d509" 00:13:29.702 ], 00:13:29.702 "product_name": "Raid Volume", 00:13:29.702 "block_size": 512, 00:13:29.702 "num_blocks": 196608, 00:13:29.702 "uuid": "bb22d630-be75-4f89-8283-d19b9f54d509", 00:13:29.702 "assigned_rate_limits": { 00:13:29.702 "rw_ios_per_sec": 0, 00:13:29.702 "rw_mbytes_per_sec": 0, 00:13:29.702 "r_mbytes_per_sec": 0, 00:13:29.702 "w_mbytes_per_sec": 0 00:13:29.702 }, 00:13:29.702 "claimed": false, 00:13:29.702 "zoned": false, 00:13:29.702 "supported_io_types": { 00:13:29.702 "read": true, 00:13:29.702 "write": true, 00:13:29.702 "unmap": true, 00:13:29.702 "flush": true, 00:13:29.702 "reset": true, 00:13:29.702 "nvme_admin": false, 00:13:29.702 "nvme_io": false, 00:13:29.702 "nvme_io_md": false, 00:13:29.702 "write_zeroes": true, 
00:13:29.702 "zcopy": false, 00:13:29.702 "get_zone_info": false, 00:13:29.702 "zone_management": false, 00:13:29.702 "zone_append": false, 00:13:29.702 "compare": false, 00:13:29.702 "compare_and_write": false, 00:13:29.702 "abort": false, 00:13:29.702 "seek_hole": false, 00:13:29.702 "seek_data": false, 00:13:29.702 "copy": false, 00:13:29.702 "nvme_iov_md": false 00:13:29.702 }, 00:13:29.702 "memory_domains": [ 00:13:29.702 { 00:13:29.702 "dma_device_id": "system", 00:13:29.702 "dma_device_type": 1 00:13:29.702 }, 00:13:29.702 { 00:13:29.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.702 "dma_device_type": 2 00:13:29.702 }, 00:13:29.702 { 00:13:29.702 "dma_device_id": "system", 00:13:29.702 "dma_device_type": 1 00:13:29.702 }, 00:13:29.702 { 00:13:29.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.702 "dma_device_type": 2 00:13:29.702 }, 00:13:29.702 { 00:13:29.702 "dma_device_id": "system", 00:13:29.702 "dma_device_type": 1 00:13:29.702 }, 00:13:29.702 { 00:13:29.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.702 "dma_device_type": 2 00:13:29.702 } 00:13:29.702 ], 00:13:29.702 "driver_specific": { 00:13:29.702 "raid": { 00:13:29.702 "uuid": "bb22d630-be75-4f89-8283-d19b9f54d509", 00:13:29.702 "strip_size_kb": 64, 00:13:29.702 "state": "online", 00:13:29.702 "raid_level": "concat", 00:13:29.702 "superblock": false, 00:13:29.702 "num_base_bdevs": 3, 00:13:29.702 "num_base_bdevs_discovered": 3, 00:13:29.702 "num_base_bdevs_operational": 3, 00:13:29.702 "base_bdevs_list": [ 00:13:29.702 { 00:13:29.702 "name": "NewBaseBdev", 00:13:29.702 "uuid": "166f658d-1ad5-4133-91cf-dfcd5879c5c6", 00:13:29.702 "is_configured": true, 00:13:29.702 "data_offset": 0, 00:13:29.702 "data_size": 65536 00:13:29.702 }, 00:13:29.702 { 00:13:29.702 "name": "BaseBdev2", 00:13:29.702 "uuid": "266560a7-d599-4ec0-a6aa-ac8fca1db896", 00:13:29.702 "is_configured": true, 00:13:29.702 "data_offset": 0, 00:13:29.702 "data_size": 65536 00:13:29.702 }, 00:13:29.702 { 
00:13:29.702 "name": "BaseBdev3", 00:13:29.702 "uuid": "49416a1b-1e01-4efe-9d63-03264c30c6b4", 00:13:29.702 "is_configured": true, 00:13:29.702 "data_offset": 0, 00:13:29.702 "data_size": 65536 00:13:29.702 } 00:13:29.702 ] 00:13:29.702 } 00:13:29.702 } 00:13:29.702 }' 00:13:29.702 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:29.961 BaseBdev2 00:13:29.961 BaseBdev3' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:29.961 [2024-10-30 10:41:51.393761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.961 [2024-10-30 10:41:51.393793] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.961 [2024-10-30 10:41:51.393880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.961 [2024-10-30 10:41:51.393951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.961 [2024-10-30 10:41:51.393972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65804 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65804 ']' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65804 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:29.961 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65804 00:13:30.220 killing process with pid 65804 00:13:30.220 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:30.220 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:30.220 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65804' 00:13:30.220 10:41:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65804 00:13:30.220 [2024-10-30 10:41:51.431125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.220 10:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65804 00:13:30.479 [2024-10-30 10:41:51.691768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:31.415 00:13:31.415 real 0m11.507s 00:13:31.415 user 0m19.140s 00:13:31.415 sys 0m1.524s 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.415 ************************************ 00:13:31.415 END TEST raid_state_function_test 00:13:31.415 ************************************ 00:13:31.415 10:41:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:13:31.415 10:41:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:31.415 10:41:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:31.415 10:41:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.415 ************************************ 00:13:31.415 START TEST raid_state_function_test_sb 00:13:31.415 ************************************ 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:31.415 Process raid pid: 66436 00:13:31.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66436 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66436' 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66436 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66436 ']' 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:31.415 10:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.415 [2024-10-30 10:41:52.860501] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:13:31.415 [2024-10-30 10:41:52.860950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.678 [2024-10-30 10:41:53.049668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.938 [2024-10-30 10:41:53.216490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.196 [2024-10-30 10:41:53.437542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.196 [2024-10-30 10:41:53.437758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.455 [2024-10-30 10:41:53.886759] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:32.455 [2024-10-30 10:41:53.886827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:32.455 [2024-10-30 
10:41:53.886845] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:32.455 [2024-10-30 10:41:53.886862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:32.455 [2024-10-30 10:41:53.886873] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:32.455 [2024-10-30 10:41:53.886888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.455 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.713 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.713 "name": "Existed_Raid", 00:13:32.713 "uuid": "419cb938-beed-48f8-aba9-19164ed88855", 00:13:32.714 "strip_size_kb": 64, 00:13:32.714 "state": "configuring", 00:13:32.714 "raid_level": "concat", 00:13:32.714 "superblock": true, 00:13:32.714 "num_base_bdevs": 3, 00:13:32.714 "num_base_bdevs_discovered": 0, 00:13:32.714 "num_base_bdevs_operational": 3, 00:13:32.714 "base_bdevs_list": [ 00:13:32.714 { 00:13:32.714 "name": "BaseBdev1", 00:13:32.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.714 "is_configured": false, 00:13:32.714 "data_offset": 0, 00:13:32.714 "data_size": 0 00:13:32.714 }, 00:13:32.714 { 00:13:32.714 "name": "BaseBdev2", 00:13:32.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.714 "is_configured": false, 00:13:32.714 "data_offset": 0, 00:13:32.714 "data_size": 0 00:13:32.714 }, 00:13:32.714 { 00:13:32.714 "name": "BaseBdev3", 00:13:32.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.714 "is_configured": false, 00:13:32.714 "data_offset": 0, 00:13:32.714 "data_size": 0 00:13:32.714 } 00:13:32.714 ] 00:13:32.714 }' 00:13:32.714 10:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.714 10:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.972 [2024-10-30 10:41:54.398833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:32.972 [2024-10-30 10:41:54.398877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.972 [2024-10-30 10:41:54.406841] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:32.972 [2024-10-30 10:41:54.406897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:32.972 [2024-10-30 10:41:54.406913] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:32.972 [2024-10-30 10:41:54.406929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:32.972 [2024-10-30 10:41:54.406938] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:32.972 [2024-10-30 10:41:54.406962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:32.972 
10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.972 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.232 [2024-10-30 10:41:54.452879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.232 BaseBdev1 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.232 [ 00:13:33.232 { 
00:13:33.232 "name": "BaseBdev1", 00:13:33.232 "aliases": [ 00:13:33.232 "80a4707d-bf72-4c01-a29c-d0f8ad8240ad" 00:13:33.232 ], 00:13:33.232 "product_name": "Malloc disk", 00:13:33.232 "block_size": 512, 00:13:33.232 "num_blocks": 65536, 00:13:33.232 "uuid": "80a4707d-bf72-4c01-a29c-d0f8ad8240ad", 00:13:33.232 "assigned_rate_limits": { 00:13:33.232 "rw_ios_per_sec": 0, 00:13:33.232 "rw_mbytes_per_sec": 0, 00:13:33.232 "r_mbytes_per_sec": 0, 00:13:33.232 "w_mbytes_per_sec": 0 00:13:33.232 }, 00:13:33.232 "claimed": true, 00:13:33.232 "claim_type": "exclusive_write", 00:13:33.232 "zoned": false, 00:13:33.232 "supported_io_types": { 00:13:33.232 "read": true, 00:13:33.232 "write": true, 00:13:33.232 "unmap": true, 00:13:33.232 "flush": true, 00:13:33.232 "reset": true, 00:13:33.232 "nvme_admin": false, 00:13:33.232 "nvme_io": false, 00:13:33.232 "nvme_io_md": false, 00:13:33.232 "write_zeroes": true, 00:13:33.232 "zcopy": true, 00:13:33.232 "get_zone_info": false, 00:13:33.232 "zone_management": false, 00:13:33.232 "zone_append": false, 00:13:33.232 "compare": false, 00:13:33.232 "compare_and_write": false, 00:13:33.232 "abort": true, 00:13:33.232 "seek_hole": false, 00:13:33.232 "seek_data": false, 00:13:33.232 "copy": true, 00:13:33.232 "nvme_iov_md": false 00:13:33.232 }, 00:13:33.232 "memory_domains": [ 00:13:33.232 { 00:13:33.232 "dma_device_id": "system", 00:13:33.232 "dma_device_type": 1 00:13:33.232 }, 00:13:33.232 { 00:13:33.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.232 "dma_device_type": 2 00:13:33.232 } 00:13:33.232 ], 00:13:33.232 "driver_specific": {} 00:13:33.232 } 00:13:33.232 ] 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.232 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.232 "name": "Existed_Raid", 00:13:33.232 "uuid": "7ea76e21-b5e4-4c59-a1cf-7a6932e13d72", 00:13:33.232 "strip_size_kb": 64, 00:13:33.232 "state": "configuring", 00:13:33.232 "raid_level": "concat", 00:13:33.232 "superblock": true, 00:13:33.232 
"num_base_bdevs": 3, 00:13:33.232 "num_base_bdevs_discovered": 1, 00:13:33.232 "num_base_bdevs_operational": 3, 00:13:33.232 "base_bdevs_list": [ 00:13:33.232 { 00:13:33.232 "name": "BaseBdev1", 00:13:33.232 "uuid": "80a4707d-bf72-4c01-a29c-d0f8ad8240ad", 00:13:33.232 "is_configured": true, 00:13:33.232 "data_offset": 2048, 00:13:33.232 "data_size": 63488 00:13:33.232 }, 00:13:33.232 { 00:13:33.232 "name": "BaseBdev2", 00:13:33.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.232 "is_configured": false, 00:13:33.232 "data_offset": 0, 00:13:33.232 "data_size": 0 00:13:33.232 }, 00:13:33.232 { 00:13:33.232 "name": "BaseBdev3", 00:13:33.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.233 "is_configured": false, 00:13:33.233 "data_offset": 0, 00:13:33.233 "data_size": 0 00:13:33.233 } 00:13:33.233 ] 00:13:33.233 }' 00:13:33.233 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.233 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.800 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:33.800 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.800 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.800 [2024-10-30 10:41:54.993096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:33.800 [2024-10-30 10:41:54.993159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:33.800 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.800 10:41:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:33.800 
10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.800 10:41:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.800 [2024-10-30 10:41:55.005176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.800 [2024-10-30 10:41:55.007859] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:33.800 [2024-10-30 10:41:55.008067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:33.800 [2024-10-30 10:41:55.008209] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:33.800 [2024-10-30 10:41:55.008360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.800 "name": "Existed_Raid", 00:13:33.800 "uuid": "eea14267-0f97-428d-9a6d-5d4831e0a210", 00:13:33.800 "strip_size_kb": 64, 00:13:33.800 "state": "configuring", 00:13:33.800 "raid_level": "concat", 00:13:33.800 "superblock": true, 00:13:33.800 "num_base_bdevs": 3, 00:13:33.800 "num_base_bdevs_discovered": 1, 00:13:33.800 "num_base_bdevs_operational": 3, 00:13:33.800 "base_bdevs_list": [ 00:13:33.800 { 00:13:33.800 "name": "BaseBdev1", 00:13:33.800 "uuid": "80a4707d-bf72-4c01-a29c-d0f8ad8240ad", 00:13:33.800 "is_configured": true, 00:13:33.800 "data_offset": 2048, 00:13:33.800 "data_size": 63488 00:13:33.800 }, 00:13:33.800 { 00:13:33.800 "name": "BaseBdev2", 00:13:33.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.800 "is_configured": false, 00:13:33.800 "data_offset": 0, 00:13:33.800 "data_size": 0 00:13:33.800 }, 00:13:33.800 { 00:13:33.800 "name": "BaseBdev3", 00:13:33.800 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:33.800 "is_configured": false, 00:13:33.800 "data_offset": 0, 00:13:33.800 "data_size": 0 00:13:33.800 } 00:13:33.800 ] 00:13:33.800 }' 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.800 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.367 [2024-10-30 10:41:55.572394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.367 BaseBdev2 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.367 [ 00:13:34.367 { 00:13:34.367 "name": "BaseBdev2", 00:13:34.367 "aliases": [ 00:13:34.367 "0dacb49e-0bc6-4a69-8e13-c3b8dd7de154" 00:13:34.367 ], 00:13:34.367 "product_name": "Malloc disk", 00:13:34.367 "block_size": 512, 00:13:34.367 "num_blocks": 65536, 00:13:34.367 "uuid": "0dacb49e-0bc6-4a69-8e13-c3b8dd7de154", 00:13:34.367 "assigned_rate_limits": { 00:13:34.367 "rw_ios_per_sec": 0, 00:13:34.367 "rw_mbytes_per_sec": 0, 00:13:34.367 "r_mbytes_per_sec": 0, 00:13:34.367 "w_mbytes_per_sec": 0 00:13:34.367 }, 00:13:34.367 "claimed": true, 00:13:34.367 "claim_type": "exclusive_write", 00:13:34.367 "zoned": false, 00:13:34.367 "supported_io_types": { 00:13:34.367 "read": true, 00:13:34.367 "write": true, 00:13:34.367 "unmap": true, 00:13:34.367 "flush": true, 00:13:34.367 "reset": true, 00:13:34.367 "nvme_admin": false, 00:13:34.367 "nvme_io": false, 00:13:34.367 "nvme_io_md": false, 00:13:34.367 "write_zeroes": true, 00:13:34.367 "zcopy": true, 00:13:34.367 "get_zone_info": false, 00:13:34.367 "zone_management": false, 00:13:34.367 "zone_append": false, 00:13:34.367 "compare": false, 00:13:34.367 "compare_and_write": false, 00:13:34.367 "abort": true, 00:13:34.367 "seek_hole": false, 00:13:34.367 "seek_data": false, 00:13:34.367 "copy": true, 00:13:34.367 "nvme_iov_md": false 00:13:34.367 }, 00:13:34.367 "memory_domains": [ 00:13:34.367 { 00:13:34.367 "dma_device_id": "system", 00:13:34.367 "dma_device_type": 1 00:13:34.367 }, 00:13:34.367 { 00:13:34.367 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.367 "dma_device_type": 2 00:13:34.367 } 00:13:34.367 ], 00:13:34.367 "driver_specific": {} 00:13:34.367 } 00:13:34.367 ] 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.367 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.367 "name": "Existed_Raid", 00:13:34.367 "uuid": "eea14267-0f97-428d-9a6d-5d4831e0a210", 00:13:34.367 "strip_size_kb": 64, 00:13:34.367 "state": "configuring", 00:13:34.367 "raid_level": "concat", 00:13:34.367 "superblock": true, 00:13:34.367 "num_base_bdevs": 3, 00:13:34.367 "num_base_bdevs_discovered": 2, 00:13:34.367 "num_base_bdevs_operational": 3, 00:13:34.367 "base_bdevs_list": [ 00:13:34.367 { 00:13:34.367 "name": "BaseBdev1", 00:13:34.367 "uuid": "80a4707d-bf72-4c01-a29c-d0f8ad8240ad", 00:13:34.367 "is_configured": true, 00:13:34.367 "data_offset": 2048, 00:13:34.367 "data_size": 63488 00:13:34.367 }, 00:13:34.367 { 00:13:34.367 "name": "BaseBdev2", 00:13:34.367 "uuid": "0dacb49e-0bc6-4a69-8e13-c3b8dd7de154", 00:13:34.368 "is_configured": true, 00:13:34.368 "data_offset": 2048, 00:13:34.368 "data_size": 63488 00:13:34.368 }, 00:13:34.368 { 00:13:34.368 "name": "BaseBdev3", 00:13:34.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.368 "is_configured": false, 00:13:34.368 "data_offset": 0, 00:13:34.368 "data_size": 0 00:13:34.368 } 00:13:34.368 ] 00:13:34.368 }' 00:13:34.368 10:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.368 10:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:34.935 10:41:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.935 [2024-10-30 10:41:56.198464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.935 [2024-10-30 10:41:56.198804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:34.935 [2024-10-30 10:41:56.198840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:34.935 [2024-10-30 10:41:56.199211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:34.935 BaseBdev3 00:13:34.935 [2024-10-30 10:41:56.199423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:34.935 [2024-10-30 10:41:56.199446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:34.935 [2024-10-30 10:41:56.199643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:34.935 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.936 [ 00:13:34.936 { 00:13:34.936 "name": "BaseBdev3", 00:13:34.936 "aliases": [ 00:13:34.936 "0bd07bda-be5b-4e1c-8605-30deb4da3d8f" 00:13:34.936 ], 00:13:34.936 "product_name": "Malloc disk", 00:13:34.936 "block_size": 512, 00:13:34.936 "num_blocks": 65536, 00:13:34.936 "uuid": "0bd07bda-be5b-4e1c-8605-30deb4da3d8f", 00:13:34.936 "assigned_rate_limits": { 00:13:34.936 "rw_ios_per_sec": 0, 00:13:34.936 "rw_mbytes_per_sec": 0, 00:13:34.936 "r_mbytes_per_sec": 0, 00:13:34.936 "w_mbytes_per_sec": 0 00:13:34.936 }, 00:13:34.936 "claimed": true, 00:13:34.936 "claim_type": "exclusive_write", 00:13:34.936 "zoned": false, 00:13:34.936 "supported_io_types": { 00:13:34.936 "read": true, 00:13:34.936 "write": true, 00:13:34.936 "unmap": true, 00:13:34.936 "flush": true, 00:13:34.936 "reset": true, 00:13:34.936 "nvme_admin": false, 00:13:34.936 "nvme_io": false, 00:13:34.936 "nvme_io_md": false, 00:13:34.936 "write_zeroes": true, 00:13:34.936 "zcopy": true, 00:13:34.936 "get_zone_info": false, 00:13:34.936 "zone_management": false, 00:13:34.936 "zone_append": false, 00:13:34.936 "compare": false, 00:13:34.936 "compare_and_write": false, 00:13:34.936 "abort": true, 00:13:34.936 "seek_hole": false, 00:13:34.936 "seek_data": false, 
00:13:34.936 "copy": true, 00:13:34.936 "nvme_iov_md": false 00:13:34.936 }, 00:13:34.936 "memory_domains": [ 00:13:34.936 { 00:13:34.936 "dma_device_id": "system", 00:13:34.936 "dma_device_type": 1 00:13:34.936 }, 00:13:34.936 { 00:13:34.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.936 "dma_device_type": 2 00:13:34.936 } 00:13:34.936 ], 00:13:34.936 "driver_specific": {} 00:13:34.936 } 00:13:34.936 ] 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.936 "name": "Existed_Raid", 00:13:34.936 "uuid": "eea14267-0f97-428d-9a6d-5d4831e0a210", 00:13:34.936 "strip_size_kb": 64, 00:13:34.936 "state": "online", 00:13:34.936 "raid_level": "concat", 00:13:34.936 "superblock": true, 00:13:34.936 "num_base_bdevs": 3, 00:13:34.936 "num_base_bdevs_discovered": 3, 00:13:34.936 "num_base_bdevs_operational": 3, 00:13:34.936 "base_bdevs_list": [ 00:13:34.936 { 00:13:34.936 "name": "BaseBdev1", 00:13:34.936 "uuid": "80a4707d-bf72-4c01-a29c-d0f8ad8240ad", 00:13:34.936 "is_configured": true, 00:13:34.936 "data_offset": 2048, 00:13:34.936 "data_size": 63488 00:13:34.936 }, 00:13:34.936 { 00:13:34.936 "name": "BaseBdev2", 00:13:34.936 "uuid": "0dacb49e-0bc6-4a69-8e13-c3b8dd7de154", 00:13:34.936 "is_configured": true, 00:13:34.936 "data_offset": 2048, 00:13:34.936 "data_size": 63488 00:13:34.936 }, 00:13:34.936 { 00:13:34.936 "name": "BaseBdev3", 00:13:34.936 "uuid": "0bd07bda-be5b-4e1c-8605-30deb4da3d8f", 00:13:34.936 "is_configured": true, 00:13:34.936 "data_offset": 2048, 00:13:34.936 "data_size": 63488 00:13:34.936 } 00:13:34.936 ] 00:13:34.936 }' 00:13:34.936 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.936 10:41:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.504 [2024-10-30 10:41:56.783131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.504 "name": "Existed_Raid", 00:13:35.504 "aliases": [ 00:13:35.504 "eea14267-0f97-428d-9a6d-5d4831e0a210" 00:13:35.504 ], 00:13:35.504 "product_name": "Raid Volume", 00:13:35.504 "block_size": 512, 00:13:35.504 "num_blocks": 190464, 00:13:35.504 "uuid": "eea14267-0f97-428d-9a6d-5d4831e0a210", 00:13:35.504 "assigned_rate_limits": { 00:13:35.504 "rw_ios_per_sec": 0, 00:13:35.504 "rw_mbytes_per_sec": 0, 00:13:35.504 
"r_mbytes_per_sec": 0, 00:13:35.504 "w_mbytes_per_sec": 0 00:13:35.504 }, 00:13:35.504 "claimed": false, 00:13:35.504 "zoned": false, 00:13:35.504 "supported_io_types": { 00:13:35.504 "read": true, 00:13:35.504 "write": true, 00:13:35.504 "unmap": true, 00:13:35.504 "flush": true, 00:13:35.504 "reset": true, 00:13:35.504 "nvme_admin": false, 00:13:35.504 "nvme_io": false, 00:13:35.504 "nvme_io_md": false, 00:13:35.504 "write_zeroes": true, 00:13:35.504 "zcopy": false, 00:13:35.504 "get_zone_info": false, 00:13:35.504 "zone_management": false, 00:13:35.504 "zone_append": false, 00:13:35.504 "compare": false, 00:13:35.504 "compare_and_write": false, 00:13:35.504 "abort": false, 00:13:35.504 "seek_hole": false, 00:13:35.504 "seek_data": false, 00:13:35.504 "copy": false, 00:13:35.504 "nvme_iov_md": false 00:13:35.504 }, 00:13:35.504 "memory_domains": [ 00:13:35.504 { 00:13:35.504 "dma_device_id": "system", 00:13:35.504 "dma_device_type": 1 00:13:35.504 }, 00:13:35.504 { 00:13:35.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.504 "dma_device_type": 2 00:13:35.504 }, 00:13:35.504 { 00:13:35.504 "dma_device_id": "system", 00:13:35.504 "dma_device_type": 1 00:13:35.504 }, 00:13:35.504 { 00:13:35.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.504 "dma_device_type": 2 00:13:35.504 }, 00:13:35.504 { 00:13:35.504 "dma_device_id": "system", 00:13:35.504 "dma_device_type": 1 00:13:35.504 }, 00:13:35.504 { 00:13:35.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.504 "dma_device_type": 2 00:13:35.504 } 00:13:35.504 ], 00:13:35.504 "driver_specific": { 00:13:35.504 "raid": { 00:13:35.504 "uuid": "eea14267-0f97-428d-9a6d-5d4831e0a210", 00:13:35.504 "strip_size_kb": 64, 00:13:35.504 "state": "online", 00:13:35.504 "raid_level": "concat", 00:13:35.504 "superblock": true, 00:13:35.504 "num_base_bdevs": 3, 00:13:35.504 "num_base_bdevs_discovered": 3, 00:13:35.504 "num_base_bdevs_operational": 3, 00:13:35.504 "base_bdevs_list": [ 00:13:35.504 { 00:13:35.504 
"name": "BaseBdev1", 00:13:35.504 "uuid": "80a4707d-bf72-4c01-a29c-d0f8ad8240ad", 00:13:35.504 "is_configured": true, 00:13:35.504 "data_offset": 2048, 00:13:35.504 "data_size": 63488 00:13:35.504 }, 00:13:35.504 { 00:13:35.504 "name": "BaseBdev2", 00:13:35.504 "uuid": "0dacb49e-0bc6-4a69-8e13-c3b8dd7de154", 00:13:35.504 "is_configured": true, 00:13:35.504 "data_offset": 2048, 00:13:35.504 "data_size": 63488 00:13:35.504 }, 00:13:35.504 { 00:13:35.504 "name": "BaseBdev3", 00:13:35.504 "uuid": "0bd07bda-be5b-4e1c-8605-30deb4da3d8f", 00:13:35.504 "is_configured": true, 00:13:35.504 "data_offset": 2048, 00:13:35.504 "data_size": 63488 00:13:35.504 } 00:13:35.504 ] 00:13:35.504 } 00:13:35.504 } 00:13:35.504 }' 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:35.504 BaseBdev2 00:13:35.504 BaseBdev3' 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.504 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.505 10:41:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.763 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.763 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.763 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.763 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:35.763 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.763 10:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.763 10:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.763 [2024-10-30 10:41:57.102893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.763 [2024-10-30 10:41:57.102928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.763 [2024-10-30 10:41:57.103032] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.763 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.024 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.024 "name": "Existed_Raid", 00:13:36.024 "uuid": "eea14267-0f97-428d-9a6d-5d4831e0a210", 00:13:36.024 "strip_size_kb": 64, 00:13:36.024 "state": "offline", 00:13:36.024 "raid_level": "concat", 00:13:36.024 "superblock": true, 00:13:36.024 "num_base_bdevs": 3, 00:13:36.024 "num_base_bdevs_discovered": 2, 00:13:36.024 "num_base_bdevs_operational": 2, 00:13:36.024 "base_bdevs_list": [ 00:13:36.024 { 00:13:36.024 "name": null, 00:13:36.024 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:36.024 "is_configured": false, 00:13:36.024 "data_offset": 0, 00:13:36.024 "data_size": 63488 00:13:36.024 }, 00:13:36.024 { 00:13:36.024 "name": "BaseBdev2", 00:13:36.024 "uuid": "0dacb49e-0bc6-4a69-8e13-c3b8dd7de154", 00:13:36.024 "is_configured": true, 00:13:36.024 "data_offset": 2048, 00:13:36.024 "data_size": 63488 00:13:36.024 }, 00:13:36.024 { 00:13:36.024 "name": "BaseBdev3", 00:13:36.024 "uuid": "0bd07bda-be5b-4e1c-8605-30deb4da3d8f", 00:13:36.024 "is_configured": true, 00:13:36.024 "data_offset": 2048, 00:13:36.024 "data_size": 63488 00:13:36.024 } 00:13:36.024 ] 00:13:36.024 }' 00:13:36.024 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.024 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.283 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.283 [2024-10-30 10:41:57.741031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.542 [2024-10-30 10:41:57.882532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:36.542 [2024-10-30 10:41:57.882593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:36.542 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.543 10:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.802 BaseBdev2 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.802 
10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.802 [ 00:13:36.802 { 00:13:36.802 "name": "BaseBdev2", 00:13:36.802 "aliases": [ 00:13:36.802 "bf17e7d4-9bd5-4308-aa81-acf77a05b419" 00:13:36.802 ], 00:13:36.802 "product_name": "Malloc disk", 00:13:36.802 "block_size": 512, 00:13:36.802 "num_blocks": 65536, 00:13:36.802 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:36.802 "assigned_rate_limits": { 00:13:36.802 "rw_ios_per_sec": 0, 00:13:36.802 "rw_mbytes_per_sec": 0, 00:13:36.802 "r_mbytes_per_sec": 0, 00:13:36.802 "w_mbytes_per_sec": 0 
00:13:36.802 }, 00:13:36.802 "claimed": false, 00:13:36.802 "zoned": false, 00:13:36.802 "supported_io_types": { 00:13:36.802 "read": true, 00:13:36.802 "write": true, 00:13:36.802 "unmap": true, 00:13:36.802 "flush": true, 00:13:36.802 "reset": true, 00:13:36.802 "nvme_admin": false, 00:13:36.802 "nvme_io": false, 00:13:36.802 "nvme_io_md": false, 00:13:36.802 "write_zeroes": true, 00:13:36.802 "zcopy": true, 00:13:36.802 "get_zone_info": false, 00:13:36.802 "zone_management": false, 00:13:36.802 "zone_append": false, 00:13:36.802 "compare": false, 00:13:36.802 "compare_and_write": false, 00:13:36.802 "abort": true, 00:13:36.802 "seek_hole": false, 00:13:36.802 "seek_data": false, 00:13:36.802 "copy": true, 00:13:36.802 "nvme_iov_md": false 00:13:36.802 }, 00:13:36.802 "memory_domains": [ 00:13:36.802 { 00:13:36.802 "dma_device_id": "system", 00:13:36.802 "dma_device_type": 1 00:13:36.802 }, 00:13:36.802 { 00:13:36.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.802 "dma_device_type": 2 00:13:36.802 } 00:13:36.802 ], 00:13:36.802 "driver_specific": {} 00:13:36.802 } 00:13:36.802 ] 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:36.802 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.803 BaseBdev3 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.803 [ 00:13:36.803 { 00:13:36.803 "name": "BaseBdev3", 00:13:36.803 "aliases": [ 00:13:36.803 "d450a455-b679-4286-9f98-f722a04545c4" 00:13:36.803 ], 00:13:36.803 "product_name": "Malloc disk", 00:13:36.803 "block_size": 512, 00:13:36.803 "num_blocks": 65536, 00:13:36.803 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:36.803 "assigned_rate_limits": { 00:13:36.803 "rw_ios_per_sec": 0, 00:13:36.803 "rw_mbytes_per_sec": 0, 
00:13:36.803 "r_mbytes_per_sec": 0, 00:13:36.803 "w_mbytes_per_sec": 0 00:13:36.803 }, 00:13:36.803 "claimed": false, 00:13:36.803 "zoned": false, 00:13:36.803 "supported_io_types": { 00:13:36.803 "read": true, 00:13:36.803 "write": true, 00:13:36.803 "unmap": true, 00:13:36.803 "flush": true, 00:13:36.803 "reset": true, 00:13:36.803 "nvme_admin": false, 00:13:36.803 "nvme_io": false, 00:13:36.803 "nvme_io_md": false, 00:13:36.803 "write_zeroes": true, 00:13:36.803 "zcopy": true, 00:13:36.803 "get_zone_info": false, 00:13:36.803 "zone_management": false, 00:13:36.803 "zone_append": false, 00:13:36.803 "compare": false, 00:13:36.803 "compare_and_write": false, 00:13:36.803 "abort": true, 00:13:36.803 "seek_hole": false, 00:13:36.803 "seek_data": false, 00:13:36.803 "copy": true, 00:13:36.803 "nvme_iov_md": false 00:13:36.803 }, 00:13:36.803 "memory_domains": [ 00:13:36.803 { 00:13:36.803 "dma_device_id": "system", 00:13:36.803 "dma_device_type": 1 00:13:36.803 }, 00:13:36.803 { 00:13:36.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.803 "dma_device_type": 2 00:13:36.803 } 00:13:36.803 ], 00:13:36.803 "driver_specific": {} 00:13:36.803 } 00:13:36.803 ] 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.803 [2024-10-30 10:41:58.167107] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:36.803 [2024-10-30 10:41:58.167305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:36.803 [2024-10-30 10:41:58.167465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.803 [2024-10-30 10:41:58.169892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.803 10:41:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.803 "name": "Existed_Raid", 00:13:36.803 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:36.803 "strip_size_kb": 64, 00:13:36.803 "state": "configuring", 00:13:36.803 "raid_level": "concat", 00:13:36.803 "superblock": true, 00:13:36.803 "num_base_bdevs": 3, 00:13:36.803 "num_base_bdevs_discovered": 2, 00:13:36.803 "num_base_bdevs_operational": 3, 00:13:36.803 "base_bdevs_list": [ 00:13:36.803 { 00:13:36.803 "name": "BaseBdev1", 00:13:36.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.803 "is_configured": false, 00:13:36.803 "data_offset": 0, 00:13:36.803 "data_size": 0 00:13:36.803 }, 00:13:36.803 { 00:13:36.803 "name": "BaseBdev2", 00:13:36.803 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:36.803 "is_configured": true, 00:13:36.803 "data_offset": 2048, 00:13:36.803 "data_size": 63488 00:13:36.803 }, 00:13:36.803 { 00:13:36.803 "name": "BaseBdev3", 00:13:36.803 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:36.803 "is_configured": true, 00:13:36.803 "data_offset": 2048, 00:13:36.803 "data_size": 63488 00:13:36.803 } 00:13:36.803 ] 00:13:36.803 }' 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.803 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 [2024-10-30 10:41:58.691235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.370 "name": "Existed_Raid", 00:13:37.370 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:37.370 "strip_size_kb": 64, 00:13:37.370 "state": "configuring", 00:13:37.370 "raid_level": "concat", 00:13:37.370 "superblock": true, 00:13:37.370 "num_base_bdevs": 3, 00:13:37.370 "num_base_bdevs_discovered": 1, 00:13:37.370 "num_base_bdevs_operational": 3, 00:13:37.370 "base_bdevs_list": [ 00:13:37.370 { 00:13:37.370 "name": "BaseBdev1", 00:13:37.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.370 "is_configured": false, 00:13:37.370 "data_offset": 0, 00:13:37.370 "data_size": 0 00:13:37.370 }, 00:13:37.370 { 00:13:37.370 "name": null, 00:13:37.370 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:37.370 "is_configured": false, 00:13:37.370 "data_offset": 0, 00:13:37.370 "data_size": 63488 00:13:37.370 }, 00:13:37.370 { 00:13:37.370 "name": "BaseBdev3", 00:13:37.370 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:37.370 "is_configured": true, 00:13:37.370 "data_offset": 2048, 00:13:37.370 "data_size": 63488 00:13:37.370 } 00:13:37.370 ] 00:13:37.370 }' 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.370 10:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.938 [2024-10-30 10:41:59.312733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.938 BaseBdev1 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.938 10:41:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.938 [ 00:13:37.938 { 00:13:37.938 "name": "BaseBdev1", 00:13:37.938 "aliases": [ 00:13:37.938 "94300183-47ca-4712-823a-1e715e09e723" 00:13:37.938 ], 00:13:37.938 "product_name": "Malloc disk", 00:13:37.938 "block_size": 512, 00:13:37.938 "num_blocks": 65536, 00:13:37.938 "uuid": "94300183-47ca-4712-823a-1e715e09e723", 00:13:37.938 "assigned_rate_limits": { 00:13:37.938 "rw_ios_per_sec": 0, 00:13:37.938 "rw_mbytes_per_sec": 0, 00:13:37.938 "r_mbytes_per_sec": 0, 00:13:37.938 "w_mbytes_per_sec": 0 00:13:37.938 }, 00:13:37.938 "claimed": true, 00:13:37.938 "claim_type": "exclusive_write", 00:13:37.938 "zoned": false, 00:13:37.938 "supported_io_types": { 00:13:37.938 "read": true, 00:13:37.938 "write": true, 00:13:37.938 "unmap": true, 00:13:37.938 "flush": true, 00:13:37.938 "reset": true, 00:13:37.938 "nvme_admin": false, 00:13:37.938 "nvme_io": false, 00:13:37.938 "nvme_io_md": false, 00:13:37.938 "write_zeroes": true, 00:13:37.938 "zcopy": true, 00:13:37.938 "get_zone_info": false, 00:13:37.938 "zone_management": false, 00:13:37.938 "zone_append": false, 00:13:37.938 "compare": false, 00:13:37.938 "compare_and_write": false, 00:13:37.938 "abort": true, 00:13:37.938 "seek_hole": false, 00:13:37.938 "seek_data": false, 00:13:37.938 "copy": true, 00:13:37.938 "nvme_iov_md": false 00:13:37.938 }, 00:13:37.938 "memory_domains": [ 00:13:37.938 { 00:13:37.938 "dma_device_id": "system", 00:13:37.938 "dma_device_type": 1 00:13:37.938 }, 00:13:37.938 { 00:13:37.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.938 
"dma_device_type": 2 00:13:37.938 } 00:13:37.938 ], 00:13:37.938 "driver_specific": {} 00:13:37.938 } 00:13:37.938 ] 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:37.938 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.218 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.218 "name": "Existed_Raid", 00:13:38.218 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:38.218 "strip_size_kb": 64, 00:13:38.218 "state": "configuring", 00:13:38.218 "raid_level": "concat", 00:13:38.218 "superblock": true, 00:13:38.218 "num_base_bdevs": 3, 00:13:38.218 "num_base_bdevs_discovered": 2, 00:13:38.218 "num_base_bdevs_operational": 3, 00:13:38.218 "base_bdevs_list": [ 00:13:38.218 { 00:13:38.218 "name": "BaseBdev1", 00:13:38.218 "uuid": "94300183-47ca-4712-823a-1e715e09e723", 00:13:38.218 "is_configured": true, 00:13:38.218 "data_offset": 2048, 00:13:38.218 "data_size": 63488 00:13:38.218 }, 00:13:38.218 { 00:13:38.218 "name": null, 00:13:38.218 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:38.218 "is_configured": false, 00:13:38.218 "data_offset": 0, 00:13:38.218 "data_size": 63488 00:13:38.218 }, 00:13:38.218 { 00:13:38.218 "name": "BaseBdev3", 00:13:38.218 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:38.218 "is_configured": true, 00:13:38.218 "data_offset": 2048, 00:13:38.218 "data_size": 63488 00:13:38.218 } 00:13:38.218 ] 00:13:38.218 }' 00:13:38.218 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.218 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.475 [2024-10-30 10:41:59.912968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.475 10:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.733 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.733 "name": "Existed_Raid", 00:13:38.733 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:38.733 "strip_size_kb": 64, 00:13:38.733 "state": "configuring", 00:13:38.733 "raid_level": "concat", 00:13:38.733 "superblock": true, 00:13:38.733 "num_base_bdevs": 3, 00:13:38.733 "num_base_bdevs_discovered": 1, 00:13:38.733 "num_base_bdevs_operational": 3, 00:13:38.733 "base_bdevs_list": [ 00:13:38.733 { 00:13:38.733 "name": "BaseBdev1", 00:13:38.733 "uuid": "94300183-47ca-4712-823a-1e715e09e723", 00:13:38.733 "is_configured": true, 00:13:38.733 "data_offset": 2048, 00:13:38.733 "data_size": 63488 00:13:38.733 }, 00:13:38.733 { 00:13:38.733 "name": null, 00:13:38.733 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:38.733 "is_configured": false, 00:13:38.733 "data_offset": 0, 00:13:38.733 "data_size": 63488 00:13:38.733 }, 00:13:38.733 { 00:13:38.733 "name": null, 00:13:38.733 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:38.733 "is_configured": false, 00:13:38.733 "data_offset": 0, 00:13:38.733 "data_size": 63488 00:13:38.733 } 00:13:38.733 ] 00:13:38.733 }' 00:13:38.733 10:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.733 10:41:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.991 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.991 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:38.991 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.991 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.991 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.250 [2024-10-30 10:42:00.485175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.250 10:42:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.250 "name": "Existed_Raid", 00:13:39.250 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:39.250 "strip_size_kb": 64, 00:13:39.250 "state": "configuring", 00:13:39.250 "raid_level": "concat", 00:13:39.250 "superblock": true, 00:13:39.250 "num_base_bdevs": 3, 00:13:39.250 "num_base_bdevs_discovered": 2, 00:13:39.250 "num_base_bdevs_operational": 3, 00:13:39.250 "base_bdevs_list": [ 00:13:39.250 { 00:13:39.250 "name": "BaseBdev1", 00:13:39.250 "uuid": "94300183-47ca-4712-823a-1e715e09e723", 00:13:39.250 "is_configured": true, 00:13:39.250 "data_offset": 2048, 00:13:39.250 "data_size": 63488 00:13:39.250 }, 00:13:39.250 { 00:13:39.250 "name": null, 00:13:39.250 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:39.250 "is_configured": 
false, 00:13:39.250 "data_offset": 0, 00:13:39.250 "data_size": 63488 00:13:39.250 }, 00:13:39.250 { 00:13:39.250 "name": "BaseBdev3", 00:13:39.250 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:39.250 "is_configured": true, 00:13:39.250 "data_offset": 2048, 00:13:39.250 "data_size": 63488 00:13:39.250 } 00:13:39.250 ] 00:13:39.250 }' 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.250 10:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.818 10:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.818 [2024-10-30 10:42:01.057353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:39.818 10:42:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.818 "name": "Existed_Raid", 00:13:39.818 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:39.818 "strip_size_kb": 64, 00:13:39.818 "state": "configuring", 00:13:39.818 "raid_level": "concat", 00:13:39.818 "superblock": true, 00:13:39.818 "num_base_bdevs": 3, 00:13:39.818 
"num_base_bdevs_discovered": 1, 00:13:39.818 "num_base_bdevs_operational": 3, 00:13:39.818 "base_bdevs_list": [ 00:13:39.818 { 00:13:39.818 "name": null, 00:13:39.818 "uuid": "94300183-47ca-4712-823a-1e715e09e723", 00:13:39.818 "is_configured": false, 00:13:39.818 "data_offset": 0, 00:13:39.818 "data_size": 63488 00:13:39.818 }, 00:13:39.818 { 00:13:39.818 "name": null, 00:13:39.818 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:39.818 "is_configured": false, 00:13:39.818 "data_offset": 0, 00:13:39.818 "data_size": 63488 00:13:39.818 }, 00:13:39.818 { 00:13:39.818 "name": "BaseBdev3", 00:13:39.818 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:39.818 "is_configured": true, 00:13:39.818 "data_offset": 2048, 00:13:39.818 "data_size": 63488 00:13:39.818 } 00:13:39.818 ] 00:13:39.818 }' 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.818 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.385 10:42:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.385 [2024-10-30 10:42:01.713544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.385 
10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.385 "name": "Existed_Raid", 00:13:40.385 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:40.385 "strip_size_kb": 64, 00:13:40.385 "state": "configuring", 00:13:40.385 "raid_level": "concat", 00:13:40.385 "superblock": true, 00:13:40.385 "num_base_bdevs": 3, 00:13:40.385 "num_base_bdevs_discovered": 2, 00:13:40.385 "num_base_bdevs_operational": 3, 00:13:40.385 "base_bdevs_list": [ 00:13:40.385 { 00:13:40.385 "name": null, 00:13:40.385 "uuid": "94300183-47ca-4712-823a-1e715e09e723", 00:13:40.385 "is_configured": false, 00:13:40.385 "data_offset": 0, 00:13:40.385 "data_size": 63488 00:13:40.385 }, 00:13:40.385 { 00:13:40.385 "name": "BaseBdev2", 00:13:40.385 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:40.385 "is_configured": true, 00:13:40.385 "data_offset": 2048, 00:13:40.385 "data_size": 63488 00:13:40.385 }, 00:13:40.385 { 00:13:40.385 "name": "BaseBdev3", 00:13:40.385 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:40.385 "is_configured": true, 00:13:40.385 "data_offset": 2048, 00:13:40.385 "data_size": 63488 00:13:40.385 } 00:13:40.385 ] 00:13:40.385 }' 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.385 10:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 94300183-47ca-4712-823a-1e715e09e723 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.952 [2024-10-30 10:42:02.371521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:40.952 [2024-10-30 10:42:02.371810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:40.952 [2024-10-30 10:42:02.371834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:40.952 NewBaseBdev 00:13:40.952 [2024-10-30 10:42:02.372207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:40.952 [2024-10-30 10:42:02.372404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:40.952 [2024-10-30 10:42:02.372430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:13:40.952 [2024-10-30 10:42:02.372597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.952 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.952 [ 00:13:40.952 { 00:13:40.952 "name": "NewBaseBdev", 00:13:40.952 "aliases": [ 00:13:40.952 "94300183-47ca-4712-823a-1e715e09e723" 00:13:40.953 ], 00:13:40.953 "product_name": "Malloc disk", 00:13:40.953 "block_size": 512, 
00:13:40.953 "num_blocks": 65536, 00:13:40.953 "uuid": "94300183-47ca-4712-823a-1e715e09e723", 00:13:40.953 "assigned_rate_limits": { 00:13:40.953 "rw_ios_per_sec": 0, 00:13:40.953 "rw_mbytes_per_sec": 0, 00:13:40.953 "r_mbytes_per_sec": 0, 00:13:40.953 "w_mbytes_per_sec": 0 00:13:40.953 }, 00:13:40.953 "claimed": true, 00:13:40.953 "claim_type": "exclusive_write", 00:13:40.953 "zoned": false, 00:13:40.953 "supported_io_types": { 00:13:40.953 "read": true, 00:13:40.953 "write": true, 00:13:40.953 "unmap": true, 00:13:40.953 "flush": true, 00:13:40.953 "reset": true, 00:13:40.953 "nvme_admin": false, 00:13:40.953 "nvme_io": false, 00:13:40.953 "nvme_io_md": false, 00:13:40.953 "write_zeroes": true, 00:13:40.953 "zcopy": true, 00:13:40.953 "get_zone_info": false, 00:13:40.953 "zone_management": false, 00:13:40.953 "zone_append": false, 00:13:40.953 "compare": false, 00:13:40.953 "compare_and_write": false, 00:13:40.953 "abort": true, 00:13:40.953 "seek_hole": false, 00:13:40.953 "seek_data": false, 00:13:40.953 "copy": true, 00:13:40.953 "nvme_iov_md": false 00:13:40.953 }, 00:13:40.953 "memory_domains": [ 00:13:40.953 { 00:13:40.953 "dma_device_id": "system", 00:13:40.953 "dma_device_type": 1 00:13:40.953 }, 00:13:40.953 { 00:13:40.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.953 "dma_device_type": 2 00:13:40.953 } 00:13:40.953 ], 00:13:40.953 "driver_specific": {} 00:13:40.953 } 00:13:40.953 ] 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.953 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.211 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.211 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.211 "name": "Existed_Raid", 00:13:41.211 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:41.211 "strip_size_kb": 64, 00:13:41.211 "state": "online", 00:13:41.211 "raid_level": "concat", 00:13:41.211 "superblock": true, 00:13:41.211 "num_base_bdevs": 3, 00:13:41.211 "num_base_bdevs_discovered": 3, 00:13:41.211 "num_base_bdevs_operational": 3, 00:13:41.211 "base_bdevs_list": [ 00:13:41.211 { 00:13:41.211 "name": "NewBaseBdev", 00:13:41.211 "uuid": 
"94300183-47ca-4712-823a-1e715e09e723", 00:13:41.211 "is_configured": true, 00:13:41.211 "data_offset": 2048, 00:13:41.211 "data_size": 63488 00:13:41.211 }, 00:13:41.211 { 00:13:41.211 "name": "BaseBdev2", 00:13:41.211 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:41.211 "is_configured": true, 00:13:41.211 "data_offset": 2048, 00:13:41.211 "data_size": 63488 00:13:41.211 }, 00:13:41.211 { 00:13:41.211 "name": "BaseBdev3", 00:13:41.211 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:41.211 "is_configured": true, 00:13:41.211 "data_offset": 2048, 00:13:41.211 "data_size": 63488 00:13:41.211 } 00:13:41.211 ] 00:13:41.211 }' 00:13:41.211 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.211 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.470 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:13:41.471 [2024-10-30 10:42:02.920066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.471 10:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.729 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.729 "name": "Existed_Raid", 00:13:41.729 "aliases": [ 00:13:41.729 "c7323f26-5da3-44f8-83f7-c1348aed9a48" 00:13:41.729 ], 00:13:41.729 "product_name": "Raid Volume", 00:13:41.729 "block_size": 512, 00:13:41.729 "num_blocks": 190464, 00:13:41.729 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:41.729 "assigned_rate_limits": { 00:13:41.729 "rw_ios_per_sec": 0, 00:13:41.729 "rw_mbytes_per_sec": 0, 00:13:41.729 "r_mbytes_per_sec": 0, 00:13:41.729 "w_mbytes_per_sec": 0 00:13:41.729 }, 00:13:41.729 "claimed": false, 00:13:41.729 "zoned": false, 00:13:41.729 "supported_io_types": { 00:13:41.729 "read": true, 00:13:41.729 "write": true, 00:13:41.729 "unmap": true, 00:13:41.729 "flush": true, 00:13:41.729 "reset": true, 00:13:41.729 "nvme_admin": false, 00:13:41.729 "nvme_io": false, 00:13:41.729 "nvme_io_md": false, 00:13:41.729 "write_zeroes": true, 00:13:41.729 "zcopy": false, 00:13:41.729 "get_zone_info": false, 00:13:41.729 "zone_management": false, 00:13:41.729 "zone_append": false, 00:13:41.729 "compare": false, 00:13:41.729 "compare_and_write": false, 00:13:41.729 "abort": false, 00:13:41.729 "seek_hole": false, 00:13:41.729 "seek_data": false, 00:13:41.729 "copy": false, 00:13:41.729 "nvme_iov_md": false 00:13:41.729 }, 00:13:41.729 "memory_domains": [ 00:13:41.729 { 00:13:41.729 "dma_device_id": "system", 00:13:41.729 "dma_device_type": 1 00:13:41.729 }, 00:13:41.729 { 00:13:41.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.729 "dma_device_type": 2 00:13:41.729 }, 00:13:41.729 { 00:13:41.729 "dma_device_id": "system", 00:13:41.729 "dma_device_type": 1 00:13:41.729 }, 00:13:41.729 { 00:13:41.729 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.729 "dma_device_type": 2 00:13:41.729 }, 00:13:41.729 { 00:13:41.729 "dma_device_id": "system", 00:13:41.729 "dma_device_type": 1 00:13:41.729 }, 00:13:41.729 { 00:13:41.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.729 "dma_device_type": 2 00:13:41.729 } 00:13:41.729 ], 00:13:41.729 "driver_specific": { 00:13:41.729 "raid": { 00:13:41.729 "uuid": "c7323f26-5da3-44f8-83f7-c1348aed9a48", 00:13:41.729 "strip_size_kb": 64, 00:13:41.729 "state": "online", 00:13:41.729 "raid_level": "concat", 00:13:41.729 "superblock": true, 00:13:41.729 "num_base_bdevs": 3, 00:13:41.729 "num_base_bdevs_discovered": 3, 00:13:41.729 "num_base_bdevs_operational": 3, 00:13:41.729 "base_bdevs_list": [ 00:13:41.729 { 00:13:41.729 "name": "NewBaseBdev", 00:13:41.729 "uuid": "94300183-47ca-4712-823a-1e715e09e723", 00:13:41.729 "is_configured": true, 00:13:41.729 "data_offset": 2048, 00:13:41.729 "data_size": 63488 00:13:41.729 }, 00:13:41.729 { 00:13:41.729 "name": "BaseBdev2", 00:13:41.729 "uuid": "bf17e7d4-9bd5-4308-aa81-acf77a05b419", 00:13:41.729 "is_configured": true, 00:13:41.729 "data_offset": 2048, 00:13:41.729 "data_size": 63488 00:13:41.729 }, 00:13:41.729 { 00:13:41.729 "name": "BaseBdev3", 00:13:41.729 "uuid": "d450a455-b679-4286-9f98-f722a04545c4", 00:13:41.729 "is_configured": true, 00:13:41.729 "data_offset": 2048, 00:13:41.729 "data_size": 63488 00:13:41.729 } 00:13:41.729 ] 00:13:41.729 } 00:13:41.729 } 00:13:41.729 }' 00:13:41.729 10:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.729 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:41.729 BaseBdev2 00:13:41.729 BaseBdev3' 00:13:41.729 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.730 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.988 [2024-10-30 10:42:03.235764] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.988 [2024-10-30 10:42:03.235802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.988 [2024-10-30 10:42:03.235896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.988 [2024-10-30 10:42:03.235970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.988 [2024-10-30 10:42:03.236008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66436 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66436 ']' 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66436 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66436 00:13:41.988 killing process with pid 66436 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66436' 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66436 00:13:41.988 [2024-10-30 10:42:03.275082] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.988 10:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66436 00:13:42.247 [2024-10-30 10:42:03.549367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.179 10:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:43.179 00:13:43.179 real 0m11.840s 00:13:43.179 user 0m19.698s 00:13:43.179 sys 0m1.598s 00:13:43.179 10:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:13:43.179 ************************************ 00:13:43.179 10:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.179 END TEST raid_state_function_test_sb 00:13:43.179 ************************************ 00:13:43.179 10:42:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:43.179 10:42:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:43.179 10:42:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:43.179 10:42:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.179 ************************************ 00:13:43.179 START TEST raid_superblock_test 00:13:43.179 ************************************ 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:43.179 10:42:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:43.179 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67073 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67073 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67073 ']' 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:43.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:43.436 10:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.436 [2024-10-30 10:42:04.752863] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:13:43.436 [2024-10-30 10:42:04.753086] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67073 ]
00:13:43.714 [2024-10-30 10:42:04.937967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:43.714 [2024-10-30 10:42:05.064891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:43.971 [2024-10-30 10:42:05.267270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:43.971 [2024-10-30 10:42:05.267339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.537 malloc1
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.537 [2024-10-30 10:42:05.821547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:44.537 [2024-10-30 10:42:05.821626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:44.537 [2024-10-30 10:42:05.821662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:44.537 [2024-10-30 10:42:05.821678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:44.537 [2024-10-30 10:42:05.824511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:44.537 [2024-10-30 10:42:05.824559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:44.537 pt1
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.537 malloc2
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.537 [2024-10-30 10:42:05.877210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:44.537 [2024-10-30 10:42:05.877281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:44.537 [2024-10-30 10:42:05.877314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:44.537 [2024-10-30 10:42:05.877330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:44.537 [2024-10-30 10:42:05.880097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:44.537 [2024-10-30 10:42:05.880156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:44.537 pt2
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.537 malloc3
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.537 [2024-10-30 10:42:05.943499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:44.537 [2024-10-30 10:42:05.943570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:44.537 [2024-10-30 10:42:05.943605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:44.537 [2024-10-30 10:42:05.943621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:44.537 [2024-10-30 10:42:05.946476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:44.537 [2024-10-30 10:42:05.946521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:44.537 pt3
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.537 [2024-10-30 10:42:05.955568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:44.537 [2024-10-30 10:42:05.958090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:44.537 [2024-10-30 10:42:05.958191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:44.537 [2024-10-30 10:42:05.958459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:13:44.537 [2024-10-30 10:42:05.958495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:13:44.537 [2024-10-30 10:42:05.958870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
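[editorial note] The next step in the log, `verify_raid_bdev_state raid_bdev1 online concat 64 3`, pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and compares the fields against the expected values. A Python sketch of the same check, using a sample payload trimmed to the values this run actually reported (the real helper lives in the bdev_raid.sh test script):

```python
import json

# Trimmed sample of what `rpc_cmd bdev_raid_get_bdevs all` returned in this run.
payload = json.loads("""
[{
  "name": "raid_bdev1",
  "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    # Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(payload, "raid_bdev1", "online", "concat", 64, 3))  # True
```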
00:13:44.537 [2024-10-30 10:42:05.959168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:13:44.537 [2024-10-30 10:42:05.959198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:13:44.537 [2024-10-30 10:42:05.959472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.537 10:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:44.795 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:44.795 "name": "raid_bdev1",
00:13:44.795 "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f",
00:13:44.795 "strip_size_kb": 64,
00:13:44.795 "state": "online",
00:13:44.795 "raid_level": "concat",
00:13:44.795 "superblock": true,
00:13:44.795 "num_base_bdevs": 3,
00:13:44.795 "num_base_bdevs_discovered": 3,
00:13:44.795 "num_base_bdevs_operational": 3,
00:13:44.795 "base_bdevs_list": [
00:13:44.795 {
00:13:44.795 "name": "pt1",
00:13:44.795 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:44.795 "is_configured": true,
00:13:44.795 "data_offset": 2048,
00:13:44.795 "data_size": 63488
00:13:44.795 },
00:13:44.795 {
00:13:44.795 "name": "pt2",
00:13:44.795 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:44.795 "is_configured": true,
00:13:44.795 "data_offset": 2048,
00:13:44.795 "data_size": 63488
00:13:44.795 },
00:13:44.795 {
00:13:44.795 "name": "pt3",
00:13:44.795 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:44.795 "is_configured": true,
00:13:44.795 "data_offset": 2048,
00:13:44.795 "data_size": 63488
00:13:44.795 }
00:13:44.795 ]
00:13:44.795 }'
00:13:44.795 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:44.795 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.052 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.053 [2024-10-30 10:42:06.488073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:45.053 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:45.311 "name": "raid_bdev1",
00:13:45.311 "aliases": [
00:13:45.311 "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f"
00:13:45.311 ],
00:13:45.311 "product_name": "Raid Volume",
00:13:45.311 "block_size": 512,
00:13:45.311 "num_blocks": 190464,
00:13:45.311 "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f",
00:13:45.311 "assigned_rate_limits": {
00:13:45.311 "rw_ios_per_sec": 0,
00:13:45.311 "rw_mbytes_per_sec": 0,
00:13:45.311 "r_mbytes_per_sec": 0,
00:13:45.311 "w_mbytes_per_sec": 0
00:13:45.311 },
00:13:45.311 "claimed": false,
00:13:45.311 "zoned": false,
00:13:45.311 "supported_io_types": {
00:13:45.311 "read": true,
00:13:45.311 "write": true,
00:13:45.311 "unmap": true,
00:13:45.311 "flush": true,
00:13:45.311 "reset": true,
00:13:45.311 "nvme_admin": false,
00:13:45.311 "nvme_io": false,
00:13:45.311 "nvme_io_md": false,
00:13:45.311 "write_zeroes": true,
00:13:45.311 "zcopy": false,
00:13:45.311 "get_zone_info": false,
00:13:45.311 "zone_management": false,
00:13:45.311 "zone_append": false,
00:13:45.311 "compare": false,
00:13:45.311 "compare_and_write": false,
00:13:45.311 "abort": false,
00:13:45.311 "seek_hole": false,
00:13:45.311 "seek_data": false,
00:13:45.311 "copy": false,
00:13:45.311 "nvme_iov_md": false
00:13:45.311 },
00:13:45.311 "memory_domains": [
00:13:45.311 {
00:13:45.311 "dma_device_id": "system",
00:13:45.311 "dma_device_type": 1
00:13:45.311 },
00:13:45.311 {
00:13:45.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:45.311 "dma_device_type": 2
00:13:45.311 },
00:13:45.311 {
00:13:45.311 "dma_device_id": "system",
00:13:45.311 "dma_device_type": 1
00:13:45.311 },
00:13:45.311 {
00:13:45.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:45.311 "dma_device_type": 2
00:13:45.311 },
00:13:45.311 {
00:13:45.311 "dma_device_id": "system",
00:13:45.311 "dma_device_type": 1
00:13:45.311 },
00:13:45.311 {
00:13:45.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:45.311 "dma_device_type": 2
00:13:45.311 }
00:13:45.311 ],
00:13:45.311 "driver_specific": {
00:13:45.311 "raid": {
00:13:45.311 "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f",
00:13:45.311 "strip_size_kb": 64,
00:13:45.311 "state": "online",
00:13:45.311 "raid_level": "concat",
00:13:45.311 "superblock": true,
00:13:45.311 "num_base_bdevs": 3,
00:13:45.311 "num_base_bdevs_discovered": 3,
00:13:45.311 "num_base_bdevs_operational": 3,
00:13:45.311 "base_bdevs_list": [
00:13:45.311 {
00:13:45.311 "name": "pt1",
00:13:45.311 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:45.311 "is_configured": true,
00:13:45.311 "data_offset": 2048,
00:13:45.311 "data_size": 63488
00:13:45.311 },
00:13:45.311 {
00:13:45.311 "name": "pt2",
00:13:45.311 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:45.311 "is_configured": true,
00:13:45.311 "data_offset": 2048,
00:13:45.311 "data_size": 63488
00:13:45.311 },
00:13:45.311 {
00:13:45.311 "name": "pt3",
00:13:45.311 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:45.311 "is_configured": true,
00:13:45.311 "data_offset": 2048,
00:13:45.311 "data_size": 63488
00:13:45.311 }
00:13:45.311 ]
00:13:45.311 }
00:13:45.311 }
00:13:45.311 }'
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:45.311 pt2
00:13:45.311 pt3'
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.311 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:45.312 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:45.312 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:45.312 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:45.312 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.312 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.312 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:45.312 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:13:45.569 [2024-10-30 10:42:06.812056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=422a3f5f-0f93-44b3-a43f-a02e0ee7c93f
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 422a3f5f-0f93-44b3-a43f-a02e0ee7c93f ']'
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.569 [2024-10-30 10:42:06.855692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:45.569 [2024-10-30 10:42:06.855732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:45.569 [2024-10-30 10:42:06.855833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:45.569 [2024-10-30 10:42:06.855923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:45.569 [2024-10-30 10:42:06.855948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.569 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:13:45.570 10:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.570 [2024-10-30 10:42:07.007822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:45.570 [2024-10-30 10:42:07.010414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:45.570 [2024-10-30 10:42:07.010507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:13:45.570 [2024-10-30 10:42:07.010578] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:13:45.570 [2024-10-30 10:42:07.010684] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:13:45.570 [2024-10-30 10:42:07.010729] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:13:45.570 [2024-10-30 10:42:07.010759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:45.570 [2024-10-30 10:42:07.010777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:13:45.570 request:
00:13:45.570 {
00:13:45.570 "name": "raid_bdev1",
00:13:45.570 "raid_level": "concat",
00:13:45.570 "base_bdevs": [
00:13:45.570 "malloc1",
00:13:45.570 "malloc2",
00:13:45.570 "malloc3"
00:13:45.570 ],
00:13:45.570 "strip_size_kb": 64,
00:13:45.570 "superblock": false,
00:13:45.570 "method": "bdev_raid_create",
00:13:45.570 "req_id": 1
00:13:45.570 }
00:13:45.570 Got JSON-RPC error response
00:13:45.570 response:
00:13:45.570 {
00:13:45.570 "code": -17,
00:13:45.570 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:45.570 }
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.570 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.828 [2024-10-30 10:42:07.071788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:45.828 [2024-10-30 10:42:07.071848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:45.828 [2024-10-30 10:42:07.071877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:45.828 [2024-10-30 10:42:07.071892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:45.828 [2024-10-30 10:42:07.074869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:45.828 [2024-10-30 10:42:07.074914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:45.828 [2024-10-30 10:42:07.075037] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:45.828 [2024-10-30 10:42:07.075126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:45.828 pt1
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:45.828 "name": "raid_bdev1",
00:13:45.828 "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f",
00:13:45.828 "strip_size_kb": 64,
00:13:45.828 "state": "configuring",
00:13:45.828 "raid_level": "concat",
00:13:45.828 "superblock": true,
00:13:45.828 "num_base_bdevs": 3,
00:13:45.828 "num_base_bdevs_discovered": 1,
00:13:45.828 "num_base_bdevs_operational": 3,
00:13:45.828 "base_bdevs_list": [
00:13:45.828 {
00:13:45.828 "name": "pt1",
00:13:45.828 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:45.828 "is_configured": true,
00:13:45.828 "data_offset": 2048,
00:13:45.828 "data_size": 63488
00:13:45.828 },
00:13:45.828 {
00:13:45.828 "name": null,
00:13:45.828 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:45.828 "is_configured": false,
00:13:45.828 "data_offset": 2048,
00:13:45.828 "data_size": 63488
00:13:45.828 },
00:13:45.828 {
00:13:45.828 "name": null,
00:13:45.828 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:45.828 "is_configured": false,
00:13:45.828 "data_offset": 2048,
00:13:45.828 "data_size": 63488
00:13:45.828 }
00:13:45.828 ]
00:13:45.828 }'
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:45.828 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.395 [2024-10-30 10:42:07.591947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:46.395 [2024-10-30 10:42:07.592035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.395 [2024-10-30 10:42:07.592071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:13:46.395 [2024-10-30 10:42:07.592087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.395 [2024-10-30 10:42:07.592702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.395 [2024-10-30 10:42:07.592741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:46.395 [2024-10-30 10:42:07.592855] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:46.395 [2024-10-30 10:42:07.592888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:46.395 pt2
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.395 [2024-10-30 10:42:07.599937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107
-- # local num_base_bdevs_operational=3 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.395 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.395 "name": "raid_bdev1", 00:13:46.395 "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f", 00:13:46.395 "strip_size_kb": 64, 00:13:46.395 "state": "configuring", 00:13:46.395 "raid_level": "concat", 00:13:46.395 "superblock": true, 00:13:46.395 "num_base_bdevs": 3, 00:13:46.395 "num_base_bdevs_discovered": 1, 00:13:46.395 "num_base_bdevs_operational": 3, 00:13:46.395 "base_bdevs_list": [ 00:13:46.395 { 00:13:46.395 "name": "pt1", 00:13:46.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.395 "is_configured": true, 00:13:46.395 "data_offset": 2048, 00:13:46.396 "data_size": 63488 00:13:46.396 }, 00:13:46.396 { 00:13:46.396 "name": null, 00:13:46.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.396 "is_configured": false, 00:13:46.396 "data_offset": 0, 00:13:46.396 "data_size": 63488 00:13:46.396 }, 00:13:46.396 { 00:13:46.396 "name": null, 00:13:46.396 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.396 "is_configured": false, 00:13:46.396 "data_offset": 2048, 00:13:46.396 "data_size": 63488 00:13:46.396 } 00:13:46.396 ] 00:13:46.396 }' 00:13:46.396 10:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.396 10:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.654 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:46.654 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:46.654 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:46.654 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.654 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.912 [2024-10-30 10:42:08.124112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:46.912 [2024-10-30 10:42:08.124198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.912 [2024-10-30 10:42:08.124226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:46.912 [2024-10-30 10:42:08.124244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.912 [2024-10-30 10:42:08.124897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.912 [2024-10-30 10:42:08.124941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:46.912 [2024-10-30 10:42:08.125065] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:46.912 [2024-10-30 10:42:08.125121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:46.912 pt2 00:13:46.912 10:42:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.912 [2024-10-30 10:42:08.132088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:46.912 [2024-10-30 10:42:08.132158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.912 [2024-10-30 10:42:08.132181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:46.912 [2024-10-30 10:42:08.132197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.912 [2024-10-30 10:42:08.132747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.912 [2024-10-30 10:42:08.132806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:46.912 [2024-10-30 10:42:08.132914] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:46.912 [2024-10-30 10:42:08.133008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:46.912 [2024-10-30 10:42:08.133196] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:46.912 [2024-10-30 10:42:08.133228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:46.912 [2024-10-30 10:42:08.133572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:13:46.912 [2024-10-30 10:42:08.133798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:46.912 [2024-10-30 10:42:08.133824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:46.912 [2024-10-30 10:42:08.134048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.912 pt3 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.912 10:42:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.912 "name": "raid_bdev1", 00:13:46.912 "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f", 00:13:46.912 "strip_size_kb": 64, 00:13:46.912 "state": "online", 00:13:46.912 "raid_level": "concat", 00:13:46.912 "superblock": true, 00:13:46.912 "num_base_bdevs": 3, 00:13:46.912 "num_base_bdevs_discovered": 3, 00:13:46.912 "num_base_bdevs_operational": 3, 00:13:46.912 "base_bdevs_list": [ 00:13:46.912 { 00:13:46.912 "name": "pt1", 00:13:46.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.912 "is_configured": true, 00:13:46.912 "data_offset": 2048, 00:13:46.912 "data_size": 63488 00:13:46.912 }, 00:13:46.912 { 00:13:46.912 "name": "pt2", 00:13:46.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.912 "is_configured": true, 00:13:46.912 "data_offset": 2048, 00:13:46.912 "data_size": 63488 00:13:46.912 }, 00:13:46.912 { 00:13:46.912 "name": "pt3", 00:13:46.912 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.912 "is_configured": true, 00:13:46.912 "data_offset": 2048, 00:13:46.912 "data_size": 63488 00:13:46.912 } 00:13:46.912 ] 00:13:46.912 }' 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.912 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:47.479 [2024-10-30 10:42:08.660627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.479 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:47.479 "name": "raid_bdev1", 00:13:47.479 "aliases": [ 00:13:47.479 "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f" 00:13:47.479 ], 00:13:47.479 "product_name": "Raid Volume", 00:13:47.479 "block_size": 512, 00:13:47.479 "num_blocks": 190464, 00:13:47.479 "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f", 00:13:47.479 "assigned_rate_limits": { 00:13:47.479 "rw_ios_per_sec": 0, 00:13:47.479 "rw_mbytes_per_sec": 0, 00:13:47.479 "r_mbytes_per_sec": 0, 00:13:47.479 "w_mbytes_per_sec": 0 00:13:47.479 }, 00:13:47.479 "claimed": false, 00:13:47.480 "zoned": false, 00:13:47.480 "supported_io_types": { 00:13:47.480 "read": true, 00:13:47.480 "write": true, 00:13:47.480 "unmap": true, 00:13:47.480 "flush": true, 00:13:47.480 "reset": true, 00:13:47.480 "nvme_admin": false, 00:13:47.480 "nvme_io": false, 
00:13:47.480 "nvme_io_md": false, 00:13:47.480 "write_zeroes": true, 00:13:47.480 "zcopy": false, 00:13:47.480 "get_zone_info": false, 00:13:47.480 "zone_management": false, 00:13:47.480 "zone_append": false, 00:13:47.480 "compare": false, 00:13:47.480 "compare_and_write": false, 00:13:47.480 "abort": false, 00:13:47.480 "seek_hole": false, 00:13:47.480 "seek_data": false, 00:13:47.480 "copy": false, 00:13:47.480 "nvme_iov_md": false 00:13:47.480 }, 00:13:47.480 "memory_domains": [ 00:13:47.480 { 00:13:47.480 "dma_device_id": "system", 00:13:47.480 "dma_device_type": 1 00:13:47.480 }, 00:13:47.480 { 00:13:47.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.480 "dma_device_type": 2 00:13:47.480 }, 00:13:47.480 { 00:13:47.480 "dma_device_id": "system", 00:13:47.480 "dma_device_type": 1 00:13:47.480 }, 00:13:47.480 { 00:13:47.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.480 "dma_device_type": 2 00:13:47.480 }, 00:13:47.480 { 00:13:47.480 "dma_device_id": "system", 00:13:47.480 "dma_device_type": 1 00:13:47.480 }, 00:13:47.480 { 00:13:47.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.480 "dma_device_type": 2 00:13:47.480 } 00:13:47.480 ], 00:13:47.480 "driver_specific": { 00:13:47.480 "raid": { 00:13:47.480 "uuid": "422a3f5f-0f93-44b3-a43f-a02e0ee7c93f", 00:13:47.480 "strip_size_kb": 64, 00:13:47.480 "state": "online", 00:13:47.480 "raid_level": "concat", 00:13:47.480 "superblock": true, 00:13:47.480 "num_base_bdevs": 3, 00:13:47.480 "num_base_bdevs_discovered": 3, 00:13:47.480 "num_base_bdevs_operational": 3, 00:13:47.480 "base_bdevs_list": [ 00:13:47.480 { 00:13:47.480 "name": "pt1", 00:13:47.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.480 "is_configured": true, 00:13:47.480 "data_offset": 2048, 00:13:47.480 "data_size": 63488 00:13:47.480 }, 00:13:47.480 { 00:13:47.480 "name": "pt2", 00:13:47.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.480 "is_configured": true, 00:13:47.480 "data_offset": 2048, 00:13:47.480 
"data_size": 63488 00:13:47.480 }, 00:13:47.480 { 00:13:47.480 "name": "pt3", 00:13:47.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.480 "is_configured": true, 00:13:47.480 "data_offset": 2048, 00:13:47.480 "data_size": 63488 00:13:47.480 } 00:13:47.480 ] 00:13:47.480 } 00:13:47.480 } 00:13:47.480 }' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:47.480 pt2 00:13:47.480 pt3' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.480 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.739 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.739 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.739 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.739 10:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.739 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.739 10:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.739 10:42:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:47.739 [2024-10-30 10:42:09.000678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 422a3f5f-0f93-44b3-a43f-a02e0ee7c93f '!=' 422a3f5f-0f93-44b3-a43f-a02e0ee7c93f ']' 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67073 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67073 ']' 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67073 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67073 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:47.739 killing process with pid 67073 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67073' 00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67073 00:13:47.739 [2024-10-30 10:42:09.074822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:13:47.739 10:42:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67073 00:13:47.739 [2024-10-30 10:42:09.075007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.739 [2024-10-30 10:42:09.075102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.739 [2024-10-30 10:42:09.075134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:47.998 [2024-10-30 10:42:09.342656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.966 10:42:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:48.966 00:13:48.966 real 0m5.724s 00:13:48.966 user 0m8.694s 00:13:48.966 sys 0m0.828s 00:13:48.966 10:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:48.966 ************************************ 00:13:48.966 END TEST raid_superblock_test 00:13:48.966 ************************************ 00:13:48.966 10:42:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.966 10:42:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:48.966 10:42:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:48.966 10:42:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:48.966 10:42:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:48.966 ************************************ 00:13:48.966 START TEST raid_read_error_test 00:13:48.966 ************************************ 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:48.966 10:42:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:48.966 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rRzi63duNE 00:13:49.225 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67326 00:13:49.225 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67326 00:13:49.225 10:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:49.226 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67326 ']' 00:13:49.226 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.226 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:49.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.226 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.226 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:49.226 10:42:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.226 [2024-10-30 10:42:10.546149] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:13:49.226 [2024-10-30 10:42:10.546335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67326 ] 00:13:49.485 [2024-10-30 10:42:10.736130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.485 [2024-10-30 10:42:10.889215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.743 [2024-10-30 10:42:11.110717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.743 [2024-10-30 10:42:11.110812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 BaseBdev1_malloc 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 true 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 [2024-10-30 10:42:11.594721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:50.311 [2024-10-30 10:42:11.594786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.311 [2024-10-30 10:42:11.594814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:50.311 [2024-10-30 10:42:11.594832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.311 [2024-10-30 10:42:11.597554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.311 [2024-10-30 10:42:11.597601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.311 BaseBdev1 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 BaseBdev2_malloc 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 true 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 [2024-10-30 10:42:11.651563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:50.311 [2024-10-30 10:42:11.651632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.311 [2024-10-30 10:42:11.651668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:50.311 [2024-10-30 10:42:11.651689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.311 [2024-10-30 10:42:11.654429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.311 [2024-10-30 10:42:11.654477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:50.311 BaseBdev2 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 BaseBdev3_malloc 00:13:50.311 10:42:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 true 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 [2024-10-30 10:42:11.719992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:50.311 [2024-10-30 10:42:11.720070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.311 [2024-10-30 10:42:11.720097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:50.311 [2024-10-30 10:42:11.720114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.311 [2024-10-30 10:42:11.722856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.311 [2024-10-30 10:42:11.722905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:50.311 BaseBdev3 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.311 [2024-10-30 10:42:11.728100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.311 [2024-10-30 10:42:11.730529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.311 [2024-10-30 10:42:11.730636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.311 [2024-10-30 10:42:11.730920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:50.311 [2024-10-30 10:42:11.730951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:50.311 [2024-10-30 10:42:11.731299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:50.311 [2024-10-30 10:42:11.731516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:50.311 [2024-10-30 10:42:11.731537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:50.311 [2024-10-30 10:42:11.731719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.311 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.312 10:42:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.312 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.571 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.571 "name": "raid_bdev1", 00:13:50.571 "uuid": "de398af3-70ac-4e7f-bbb9-04ee9d678a5e", 00:13:50.571 "strip_size_kb": 64, 00:13:50.571 "state": "online", 00:13:50.571 "raid_level": "concat", 00:13:50.571 "superblock": true, 00:13:50.571 "num_base_bdevs": 3, 00:13:50.571 "num_base_bdevs_discovered": 3, 00:13:50.571 "num_base_bdevs_operational": 3, 00:13:50.571 "base_bdevs_list": [ 00:13:50.571 { 00:13:50.571 "name": "BaseBdev1", 00:13:50.571 "uuid": "d8e166fb-ce9f-5aa4-a271-807b8aecb795", 00:13:50.571 "is_configured": true, 00:13:50.571 "data_offset": 2048, 00:13:50.571 "data_size": 63488 00:13:50.571 }, 00:13:50.571 { 00:13:50.571 "name": "BaseBdev2", 00:13:50.571 "uuid": "3ba0fd2c-787d-5953-be16-7c3105ba6cc8", 00:13:50.571 "is_configured": true, 00:13:50.571 "data_offset": 2048, 00:13:50.571 "data_size": 63488 
00:13:50.571 }, 00:13:50.571 { 00:13:50.571 "name": "BaseBdev3", 00:13:50.571 "uuid": "bce22397-f496-5e0d-a291-5d4aaedcd0d6", 00:13:50.571 "is_configured": true, 00:13:50.571 "data_offset": 2048, 00:13:50.571 "data_size": 63488 00:13:50.571 } 00:13:50.571 ] 00:13:50.571 }' 00:13:50.571 10:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.571 10:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.829 10:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:50.829 10:42:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.155 [2024-10-30 10:42:12.350226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:52.091 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:52.091 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.091 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.091 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.091 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:52.091 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.092 "name": "raid_bdev1", 00:13:52.092 "uuid": "de398af3-70ac-4e7f-bbb9-04ee9d678a5e", 00:13:52.092 "strip_size_kb": 64, 00:13:52.092 "state": "online", 00:13:52.092 "raid_level": "concat", 00:13:52.092 "superblock": true, 00:13:52.092 "num_base_bdevs": 3, 00:13:52.092 "num_base_bdevs_discovered": 3, 00:13:52.092 "num_base_bdevs_operational": 3, 00:13:52.092 "base_bdevs_list": [ 00:13:52.092 { 00:13:52.092 "name": "BaseBdev1", 00:13:52.092 "uuid": "d8e166fb-ce9f-5aa4-a271-807b8aecb795", 00:13:52.092 "is_configured": true, 00:13:52.092 "data_offset": 2048, 00:13:52.092 "data_size": 63488 
00:13:52.092 }, 00:13:52.092 { 00:13:52.092 "name": "BaseBdev2", 00:13:52.092 "uuid": "3ba0fd2c-787d-5953-be16-7c3105ba6cc8", 00:13:52.092 "is_configured": true, 00:13:52.092 "data_offset": 2048, 00:13:52.092 "data_size": 63488 00:13:52.092 }, 00:13:52.092 { 00:13:52.092 "name": "BaseBdev3", 00:13:52.092 "uuid": "bce22397-f496-5e0d-a291-5d4aaedcd0d6", 00:13:52.092 "is_configured": true, 00:13:52.092 "data_offset": 2048, 00:13:52.092 "data_size": 63488 00:13:52.092 } 00:13:52.092 ] 00:13:52.092 }' 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.092 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.350 [2024-10-30 10:42:13.757158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:52.350 [2024-10-30 10:42:13.757192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.350 [2024-10-30 10:42:13.760702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.350 [2024-10-30 10:42:13.760902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.350 [2024-10-30 10:42:13.761016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.350 [2024-10-30 10:42:13.761305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:52.350 { 00:13:52.350 "results": [ 00:13:52.350 { 00:13:52.350 "job": "raid_bdev1", 00:13:52.350 "core_mask": "0x1", 00:13:52.350 "workload": "randrw", 00:13:52.350 "percentage": 50, 
00:13:52.350 "status": "finished", 00:13:52.350 "queue_depth": 1, 00:13:52.350 "io_size": 131072, 00:13:52.350 "runtime": 1.403792, 00:13:52.350 "iops": 10896.201146608615, 00:13:52.350 "mibps": 1362.0251433260769, 00:13:52.350 "io_failed": 1, 00:13:52.350 "io_timeout": 0, 00:13:52.350 "avg_latency_us": 128.10500644808548, 00:13:52.350 "min_latency_us": 38.4, 00:13:52.350 "max_latency_us": 1861.8181818181818 00:13:52.350 } 00:13:52.350 ], 00:13:52.350 "core_count": 1 00:13:52.350 } 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67326 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67326 ']' 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67326 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67326 00:13:52.350 killing process with pid 67326 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67326' 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67326 00:13:52.350 [2024-10-30 10:42:13.794574] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.350 10:42:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67326 00:13:52.608 [2024-10-30 10:42:13.994797] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.983 10:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rRzi63duNE 00:13:53.983 10:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:53.983 10:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:53.983 ************************************ 00:13:53.983 END TEST raid_read_error_test 00:13:53.983 ************************************ 00:13:53.983 10:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:53.984 10:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:53.984 10:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:53.984 10:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:53.984 10:42:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:53.984 00:13:53.984 real 0m4.636s 00:13:53.984 user 0m5.763s 00:13:53.984 sys 0m0.585s 00:13:53.984 10:42:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:53.984 10:42:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.984 10:42:15 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:53.984 10:42:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:53.984 10:42:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:53.984 10:42:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.984 ************************************ 00:13:53.984 START TEST raid_write_error_test 00:13:53.984 ************************************ 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:13:53.984 10:42:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:53.984 10:42:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.az2l0YLNQb 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67477 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67477 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67477 ']' 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.984 10:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.984 [2024-10-30 10:42:15.282206] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:13:53.984 [2024-10-30 10:42:15.282381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67477 ] 00:13:54.243 [2024-10-30 10:42:15.465707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.243 [2024-10-30 10:42:15.589762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.502 [2024-10-30 10:42:15.794512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.502 [2024-10-30 10:42:15.794590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.761 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:54.761 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:54.761 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:54.761 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:54.761 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.761 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.021 BaseBdev1_malloc 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.021 true 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.021 [2024-10-30 10:42:16.279231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:55.021 [2024-10-30 10:42:16.279491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.021 [2024-10-30 10:42:16.279531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:55.021 [2024-10-30 10:42:16.279551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.021 [2024-10-30 10:42:16.282390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.021 [2024-10-30 10:42:16.282437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.021 BaseBdev1 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.021 BaseBdev2_malloc 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.021 true 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.021 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.021 [2024-10-30 10:42:16.335208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:55.021 [2024-10-30 10:42:16.335470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.021 [2024-10-30 10:42:16.335507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:55.021 [2024-10-30 10:42:16.335527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.021 [2024-10-30 10:42:16.338276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.021 [2024-10-30 10:42:16.338324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.021 BaseBdev2 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.022 10:42:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.022 BaseBdev3_malloc 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.022 true 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.022 [2024-10-30 10:42:16.416584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:55.022 [2024-10-30 10:42:16.416686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.022 [2024-10-30 10:42:16.416721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:55.022 [2024-10-30 10:42:16.416742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.022 [2024-10-30 10:42:16.419505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.022 [2024-10-30 10:42:16.419695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:55.022 BaseBdev3 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.022 [2024-10-30 10:42:16.428681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.022 [2024-10-30 10:42:16.431116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.022 [2024-10-30 10:42:16.431227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.022 [2024-10-30 10:42:16.431503] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:55.022 [2024-10-30 10:42:16.431521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:55.022 [2024-10-30 10:42:16.431847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:55.022 [2024-10-30 10:42:16.432069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:55.022 [2024-10-30 10:42:16.432094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:55.022 [2024-10-30 10:42:16.432275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.022 "name": "raid_bdev1", 00:13:55.022 "uuid": "1869a81c-d3a7-4cf0-b44c-a929b837cc41", 00:13:55.022 "strip_size_kb": 64, 00:13:55.022 "state": "online", 00:13:55.022 "raid_level": "concat", 00:13:55.022 "superblock": true, 00:13:55.022 "num_base_bdevs": 3, 00:13:55.022 "num_base_bdevs_discovered": 3, 00:13:55.022 "num_base_bdevs_operational": 3, 00:13:55.022 "base_bdevs_list": [ 00:13:55.022 { 00:13:55.022 
"name": "BaseBdev1", 00:13:55.022 "uuid": "e76ef27e-52c4-5a3e-b1b7-ffef63a34a9f", 00:13:55.022 "is_configured": true, 00:13:55.022 "data_offset": 2048, 00:13:55.022 "data_size": 63488 00:13:55.022 }, 00:13:55.022 { 00:13:55.022 "name": "BaseBdev2", 00:13:55.022 "uuid": "85b62184-7799-5463-9021-5693868fe4ed", 00:13:55.022 "is_configured": true, 00:13:55.022 "data_offset": 2048, 00:13:55.022 "data_size": 63488 00:13:55.022 }, 00:13:55.022 { 00:13:55.022 "name": "BaseBdev3", 00:13:55.022 "uuid": "caf61744-55f5-5aec-ac1c-81a70785b40f", 00:13:55.022 "is_configured": true, 00:13:55.022 "data_offset": 2048, 00:13:55.022 "data_size": 63488 00:13:55.022 } 00:13:55.022 ] 00:13:55.022 }' 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.022 10:42:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.591 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:55.591 10:42:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:55.591 [2024-10-30 10:42:17.042182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:56.526 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.527 "name": "raid_bdev1", 00:13:56.527 "uuid": "1869a81c-d3a7-4cf0-b44c-a929b837cc41", 00:13:56.527 "strip_size_kb": 64, 00:13:56.527 "state": "online", 
00:13:56.527 "raid_level": "concat", 00:13:56.527 "superblock": true, 00:13:56.527 "num_base_bdevs": 3, 00:13:56.527 "num_base_bdevs_discovered": 3, 00:13:56.527 "num_base_bdevs_operational": 3, 00:13:56.527 "base_bdevs_list": [ 00:13:56.527 { 00:13:56.527 "name": "BaseBdev1", 00:13:56.527 "uuid": "e76ef27e-52c4-5a3e-b1b7-ffef63a34a9f", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 2048, 00:13:56.527 "data_size": 63488 00:13:56.527 }, 00:13:56.527 { 00:13:56.527 "name": "BaseBdev2", 00:13:56.527 "uuid": "85b62184-7799-5463-9021-5693868fe4ed", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 2048, 00:13:56.527 "data_size": 63488 00:13:56.527 }, 00:13:56.527 { 00:13:56.527 "name": "BaseBdev3", 00:13:56.527 "uuid": "caf61744-55f5-5aec-ac1c-81a70785b40f", 00:13:56.527 "is_configured": true, 00:13:56.527 "data_offset": 2048, 00:13:56.527 "data_size": 63488 00:13:56.527 } 00:13:56.527 ] 00:13:56.527 }' 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.527 10:42:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.093 [2024-10-30 10:42:18.456603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.093 [2024-10-30 10:42:18.456637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.093 { 00:13:57.093 "results": [ 00:13:57.093 { 00:13:57.093 "job": "raid_bdev1", 00:13:57.093 "core_mask": "0x1", 00:13:57.093 "workload": "randrw", 00:13:57.093 "percentage": 50, 00:13:57.093 "status": "finished", 00:13:57.093 "queue_depth": 1, 00:13:57.093 "io_size": 
131072, 00:13:57.093 "runtime": 1.412035, 00:13:57.093 "iops": 11094.625841427443, 00:13:57.093 "mibps": 1386.8282301784304, 00:13:57.093 "io_failed": 1, 00:13:57.093 "io_timeout": 0, 00:13:57.093 "avg_latency_us": 125.63274607310095, 00:13:57.093 "min_latency_us": 37.70181818181818, 00:13:57.093 "max_latency_us": 1876.7127272727273 00:13:57.093 } 00:13:57.093 ], 00:13:57.093 "core_count": 1 00:13:57.093 } 00:13:57.093 [2024-10-30 10:42:18.460027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.093 [2024-10-30 10:42:18.460091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.093 [2024-10-30 10:42:18.460146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.093 [2024-10-30 10:42:18.460164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67477 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67477 ']' 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67477 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67477 00:13:57.093 killing process with pid 67477 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:57.093 10:42:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67477' 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67477 00:13:57.093 [2024-10-30 10:42:18.503019] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.093 10:42:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67477 00:13:57.351 [2024-10-30 10:42:18.728191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.az2l0YLNQb 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:58.729 ************************************ 00:13:58.729 END TEST raid_write_error_test 00:13:58.729 ************************************ 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:58.729 00:13:58.729 real 0m4.702s 00:13:58.729 user 0m5.825s 00:13:58.729 sys 0m0.574s 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:58.729 10:42:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.729 10:42:19 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:58.729 10:42:19 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:13:58.729 10:42:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:58.729 10:42:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:58.729 10:42:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.729 ************************************ 00:13:58.729 START TEST raid_state_function_test 00:13:58.729 ************************************ 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:58.729 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:58.730 Process raid pid: 67615 00:13:58.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67615 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67615' 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67615 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67615 ']' 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:58.730 10:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.730 [2024-10-30 10:42:19.981790] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:13:58.730 [2024-10-30 10:42:19.982236] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.730 [2024-10-30 10:42:20.177691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.989 [2024-10-30 10:42:20.306571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.248 [2024-10-30 10:42:20.527150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.248 [2024-10-30 10:42:20.527193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.814 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:59.814 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:59.814 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:59.814 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.814 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.814 [2024-10-30 10:42:20.981304] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.814 [2024-10-30 10:42:20.981554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.814 [2024-10-30 10:42:20.981590] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.814 [2024-10-30 10:42:20.981609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.814 [2024-10-30 10:42:20.981619] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:13:59.814 [2024-10-30 10:42:20.981634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:59.814 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.814 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.815 10:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.815 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.815 10:42:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.815 "name": "Existed_Raid", 00:13:59.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.815 "strip_size_kb": 0, 00:13:59.815 "state": "configuring", 00:13:59.815 "raid_level": "raid1", 00:13:59.815 "superblock": false, 00:13:59.815 "num_base_bdevs": 3, 00:13:59.815 "num_base_bdevs_discovered": 0, 00:13:59.815 "num_base_bdevs_operational": 3, 00:13:59.815 "base_bdevs_list": [ 00:13:59.815 { 00:13:59.815 "name": "BaseBdev1", 00:13:59.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.815 "is_configured": false, 00:13:59.815 "data_offset": 0, 00:13:59.815 "data_size": 0 00:13:59.815 }, 00:13:59.815 { 00:13:59.815 "name": "BaseBdev2", 00:13:59.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.815 "is_configured": false, 00:13:59.815 "data_offset": 0, 00:13:59.815 "data_size": 0 00:13:59.815 }, 00:13:59.815 { 00:13:59.815 "name": "BaseBdev3", 00:13:59.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.815 "is_configured": false, 00:13:59.815 "data_offset": 0, 00:13:59.815 "data_size": 0 00:13:59.815 } 00:13:59.815 ] 00:13:59.815 }' 00:13:59.815 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.815 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.074 [2024-10-30 10:42:21.505399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.074 [2024-10-30 10:42:21.505571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.074 [2024-10-30 10:42:21.517388] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.074 [2024-10-30 10:42:21.517578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.074 [2024-10-30 10:42:21.517710] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.074 [2024-10-30 10:42:21.517772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.074 [2024-10-30 10:42:21.517884] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:00.074 [2024-10-30 10:42:21.517939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.074 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.332 [2024-10-30 10:42:21.565889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.332 BaseBdev1 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.332 [ 00:14:00.332 { 00:14:00.332 "name": "BaseBdev1", 00:14:00.332 "aliases": [ 00:14:00.332 "6940bf83-70bd-4e6c-8f5e-9919330e5cab" 00:14:00.332 ], 00:14:00.332 "product_name": "Malloc disk", 00:14:00.332 "block_size": 512, 00:14:00.332 "num_blocks": 65536, 00:14:00.332 "uuid": "6940bf83-70bd-4e6c-8f5e-9919330e5cab", 00:14:00.332 "assigned_rate_limits": { 00:14:00.332 "rw_ios_per_sec": 0, 00:14:00.332 "rw_mbytes_per_sec": 0, 00:14:00.332 "r_mbytes_per_sec": 0, 00:14:00.332 "w_mbytes_per_sec": 0 00:14:00.332 }, 
00:14:00.332 "claimed": true, 00:14:00.332 "claim_type": "exclusive_write", 00:14:00.332 "zoned": false, 00:14:00.332 "supported_io_types": { 00:14:00.332 "read": true, 00:14:00.332 "write": true, 00:14:00.332 "unmap": true, 00:14:00.332 "flush": true, 00:14:00.332 "reset": true, 00:14:00.332 "nvme_admin": false, 00:14:00.332 "nvme_io": false, 00:14:00.332 "nvme_io_md": false, 00:14:00.332 "write_zeroes": true, 00:14:00.332 "zcopy": true, 00:14:00.332 "get_zone_info": false, 00:14:00.332 "zone_management": false, 00:14:00.332 "zone_append": false, 00:14:00.332 "compare": false, 00:14:00.332 "compare_and_write": false, 00:14:00.332 "abort": true, 00:14:00.332 "seek_hole": false, 00:14:00.332 "seek_data": false, 00:14:00.332 "copy": true, 00:14:00.332 "nvme_iov_md": false 00:14:00.332 }, 00:14:00.332 "memory_domains": [ 00:14:00.332 { 00:14:00.332 "dma_device_id": "system", 00:14:00.332 "dma_device_type": 1 00:14:00.332 }, 00:14:00.332 { 00:14:00.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.332 "dma_device_type": 2 00:14:00.332 } 00:14:00.332 ], 00:14:00.332 "driver_specific": {} 00:14:00.332 } 00:14:00.332 ] 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.332 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.333 10:42:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.333 "name": "Existed_Raid", 00:14:00.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.333 "strip_size_kb": 0, 00:14:00.333 "state": "configuring", 00:14:00.333 "raid_level": "raid1", 00:14:00.333 "superblock": false, 00:14:00.333 "num_base_bdevs": 3, 00:14:00.333 "num_base_bdevs_discovered": 1, 00:14:00.333 "num_base_bdevs_operational": 3, 00:14:00.333 "base_bdevs_list": [ 00:14:00.333 { 00:14:00.333 "name": "BaseBdev1", 00:14:00.333 "uuid": "6940bf83-70bd-4e6c-8f5e-9919330e5cab", 00:14:00.333 "is_configured": true, 00:14:00.333 "data_offset": 0, 00:14:00.333 "data_size": 65536 00:14:00.333 }, 00:14:00.333 { 00:14:00.333 "name": "BaseBdev2", 00:14:00.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.333 "is_configured": false, 00:14:00.333 
"data_offset": 0, 00:14:00.333 "data_size": 0 00:14:00.333 }, 00:14:00.333 { 00:14:00.333 "name": "BaseBdev3", 00:14:00.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.333 "is_configured": false, 00:14:00.333 "data_offset": 0, 00:14:00.333 "data_size": 0 00:14:00.333 } 00:14:00.333 ] 00:14:00.333 }' 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.333 10:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.901 [2024-10-30 10:42:22.122114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.901 [2024-10-30 10:42:22.122189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.901 [2024-10-30 10:42:22.130143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.901 [2024-10-30 10:42:22.132583] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.901 [2024-10-30 10:42:22.132654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:14:00.901 [2024-10-30 10:42:22.132671] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:00.901 [2024-10-30 10:42:22.132687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:00.901 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.902 
10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.902 "name": "Existed_Raid", 00:14:00.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.902 "strip_size_kb": 0, 00:14:00.902 "state": "configuring", 00:14:00.902 "raid_level": "raid1", 00:14:00.902 "superblock": false, 00:14:00.902 "num_base_bdevs": 3, 00:14:00.902 "num_base_bdevs_discovered": 1, 00:14:00.902 "num_base_bdevs_operational": 3, 00:14:00.902 "base_bdevs_list": [ 00:14:00.902 { 00:14:00.902 "name": "BaseBdev1", 00:14:00.902 "uuid": "6940bf83-70bd-4e6c-8f5e-9919330e5cab", 00:14:00.902 "is_configured": true, 00:14:00.902 "data_offset": 0, 00:14:00.902 "data_size": 65536 00:14:00.902 }, 00:14:00.902 { 00:14:00.902 "name": "BaseBdev2", 00:14:00.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.902 "is_configured": false, 00:14:00.902 "data_offset": 0, 00:14:00.902 "data_size": 0 00:14:00.902 }, 00:14:00.902 { 00:14:00.902 "name": "BaseBdev3", 00:14:00.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.902 "is_configured": false, 00:14:00.902 "data_offset": 0, 00:14:00.902 "data_size": 0 00:14:00.902 } 00:14:00.902 ] 00:14:00.902 }' 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.902 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.470 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:01.470 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.470 10:42:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.470 [2024-10-30 10:42:22.716126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.470 BaseBdev2 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.471 [ 00:14:01.471 { 00:14:01.471 "name": "BaseBdev2", 00:14:01.471 "aliases": [ 00:14:01.471 "ecf2ff40-8205-4275-8225-5864f271423a" 00:14:01.471 ], 00:14:01.471 "product_name": "Malloc disk", 
00:14:01.471 "block_size": 512, 00:14:01.471 "num_blocks": 65536, 00:14:01.471 "uuid": "ecf2ff40-8205-4275-8225-5864f271423a", 00:14:01.471 "assigned_rate_limits": { 00:14:01.471 "rw_ios_per_sec": 0, 00:14:01.471 "rw_mbytes_per_sec": 0, 00:14:01.471 "r_mbytes_per_sec": 0, 00:14:01.471 "w_mbytes_per_sec": 0 00:14:01.471 }, 00:14:01.471 "claimed": true, 00:14:01.471 "claim_type": "exclusive_write", 00:14:01.471 "zoned": false, 00:14:01.471 "supported_io_types": { 00:14:01.471 "read": true, 00:14:01.471 "write": true, 00:14:01.471 "unmap": true, 00:14:01.471 "flush": true, 00:14:01.471 "reset": true, 00:14:01.471 "nvme_admin": false, 00:14:01.471 "nvme_io": false, 00:14:01.471 "nvme_io_md": false, 00:14:01.471 "write_zeroes": true, 00:14:01.471 "zcopy": true, 00:14:01.471 "get_zone_info": false, 00:14:01.471 "zone_management": false, 00:14:01.471 "zone_append": false, 00:14:01.471 "compare": false, 00:14:01.471 "compare_and_write": false, 00:14:01.471 "abort": true, 00:14:01.471 "seek_hole": false, 00:14:01.471 "seek_data": false, 00:14:01.471 "copy": true, 00:14:01.471 "nvme_iov_md": false 00:14:01.471 }, 00:14:01.471 "memory_domains": [ 00:14:01.471 { 00:14:01.471 "dma_device_id": "system", 00:14:01.471 "dma_device_type": 1 00:14:01.471 }, 00:14:01.471 { 00:14:01.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.471 "dma_device_type": 2 00:14:01.471 } 00:14:01.471 ], 00:14:01.471 "driver_specific": {} 00:14:01.471 } 00:14:01.471 ] 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.471 "name": "Existed_Raid", 00:14:01.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.471 "strip_size_kb": 0, 00:14:01.471 "state": "configuring", 00:14:01.471 "raid_level": "raid1", 00:14:01.471 "superblock": false, 00:14:01.471 "num_base_bdevs": 3, 
00:14:01.471 "num_base_bdevs_discovered": 2, 00:14:01.471 "num_base_bdevs_operational": 3, 00:14:01.471 "base_bdevs_list": [ 00:14:01.471 { 00:14:01.471 "name": "BaseBdev1", 00:14:01.471 "uuid": "6940bf83-70bd-4e6c-8f5e-9919330e5cab", 00:14:01.471 "is_configured": true, 00:14:01.471 "data_offset": 0, 00:14:01.471 "data_size": 65536 00:14:01.471 }, 00:14:01.471 { 00:14:01.471 "name": "BaseBdev2", 00:14:01.471 "uuid": "ecf2ff40-8205-4275-8225-5864f271423a", 00:14:01.471 "is_configured": true, 00:14:01.471 "data_offset": 0, 00:14:01.471 "data_size": 65536 00:14:01.471 }, 00:14:01.471 { 00:14:01.471 "name": "BaseBdev3", 00:14:01.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.471 "is_configured": false, 00:14:01.471 "data_offset": 0, 00:14:01.471 "data_size": 0 00:14:01.471 } 00:14:01.471 ] 00:14:01.471 }' 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.471 10:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.039 [2024-10-30 10:42:23.309975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.039 [2024-10-30 10:42:23.310090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:02.039 [2024-10-30 10:42:23.310112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:02.039 [2024-10-30 10:42:23.310483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:02.039 [2024-10-30 10:42:23.310710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:14:02.039 [2024-10-30 10:42:23.310735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:02.039 [2024-10-30 10:42:23.311074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.039 BaseBdev3 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.039 [ 00:14:02.039 { 00:14:02.039 "name": "BaseBdev3", 00:14:02.039 "aliases": [ 00:14:02.039 
"d1b38ad3-9ac4-425e-96ee-730413e20c8c" 00:14:02.039 ], 00:14:02.039 "product_name": "Malloc disk", 00:14:02.039 "block_size": 512, 00:14:02.039 "num_blocks": 65536, 00:14:02.039 "uuid": "d1b38ad3-9ac4-425e-96ee-730413e20c8c", 00:14:02.039 "assigned_rate_limits": { 00:14:02.039 "rw_ios_per_sec": 0, 00:14:02.039 "rw_mbytes_per_sec": 0, 00:14:02.039 "r_mbytes_per_sec": 0, 00:14:02.039 "w_mbytes_per_sec": 0 00:14:02.039 }, 00:14:02.039 "claimed": true, 00:14:02.039 "claim_type": "exclusive_write", 00:14:02.039 "zoned": false, 00:14:02.039 "supported_io_types": { 00:14:02.039 "read": true, 00:14:02.039 "write": true, 00:14:02.039 "unmap": true, 00:14:02.039 "flush": true, 00:14:02.039 "reset": true, 00:14:02.039 "nvme_admin": false, 00:14:02.039 "nvme_io": false, 00:14:02.039 "nvme_io_md": false, 00:14:02.039 "write_zeroes": true, 00:14:02.039 "zcopy": true, 00:14:02.039 "get_zone_info": false, 00:14:02.039 "zone_management": false, 00:14:02.039 "zone_append": false, 00:14:02.039 "compare": false, 00:14:02.039 "compare_and_write": false, 00:14:02.039 "abort": true, 00:14:02.039 "seek_hole": false, 00:14:02.039 "seek_data": false, 00:14:02.039 "copy": true, 00:14:02.039 "nvme_iov_md": false 00:14:02.039 }, 00:14:02.039 "memory_domains": [ 00:14:02.039 { 00:14:02.039 "dma_device_id": "system", 00:14:02.039 "dma_device_type": 1 00:14:02.039 }, 00:14:02.039 { 00:14:02.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.039 "dma_device_type": 2 00:14:02.039 } 00:14:02.039 ], 00:14:02.039 "driver_specific": {} 00:14:02.039 } 00:14:02.039 ] 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.039 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:02.040 
10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.040 "name": "Existed_Raid", 00:14:02.040 "uuid": "ccf4361c-e890-42ec-bc47-8d32f3360e99", 00:14:02.040 "strip_size_kb": 0, 00:14:02.040 "state": "online", 00:14:02.040 "raid_level": 
"raid1", 00:14:02.040 "superblock": false, 00:14:02.040 "num_base_bdevs": 3, 00:14:02.040 "num_base_bdevs_discovered": 3, 00:14:02.040 "num_base_bdevs_operational": 3, 00:14:02.040 "base_bdevs_list": [ 00:14:02.040 { 00:14:02.040 "name": "BaseBdev1", 00:14:02.040 "uuid": "6940bf83-70bd-4e6c-8f5e-9919330e5cab", 00:14:02.040 "is_configured": true, 00:14:02.040 "data_offset": 0, 00:14:02.040 "data_size": 65536 00:14:02.040 }, 00:14:02.040 { 00:14:02.040 "name": "BaseBdev2", 00:14:02.040 "uuid": "ecf2ff40-8205-4275-8225-5864f271423a", 00:14:02.040 "is_configured": true, 00:14:02.040 "data_offset": 0, 00:14:02.040 "data_size": 65536 00:14:02.040 }, 00:14:02.040 { 00:14:02.040 "name": "BaseBdev3", 00:14:02.040 "uuid": "d1b38ad3-9ac4-425e-96ee-730413e20c8c", 00:14:02.040 "is_configured": true, 00:14:02.040 "data_offset": 0, 00:14:02.040 "data_size": 65536 00:14:02.040 } 00:14:02.040 ] 00:14:02.040 }' 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.040 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.606 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.606 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:02.606 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:02.606 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:02.606 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:02.606 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:02.606 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:02.607 10:42:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:02.607 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.607 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.607 [2024-10-30 10:42:23.874588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.607 10:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.607 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:02.607 "name": "Existed_Raid", 00:14:02.607 "aliases": [ 00:14:02.607 "ccf4361c-e890-42ec-bc47-8d32f3360e99" 00:14:02.607 ], 00:14:02.607 "product_name": "Raid Volume", 00:14:02.607 "block_size": 512, 00:14:02.607 "num_blocks": 65536, 00:14:02.607 "uuid": "ccf4361c-e890-42ec-bc47-8d32f3360e99", 00:14:02.607 "assigned_rate_limits": { 00:14:02.607 "rw_ios_per_sec": 0, 00:14:02.607 "rw_mbytes_per_sec": 0, 00:14:02.607 "r_mbytes_per_sec": 0, 00:14:02.607 "w_mbytes_per_sec": 0 00:14:02.607 }, 00:14:02.607 "claimed": false, 00:14:02.607 "zoned": false, 00:14:02.607 "supported_io_types": { 00:14:02.607 "read": true, 00:14:02.607 "write": true, 00:14:02.607 "unmap": false, 00:14:02.607 "flush": false, 00:14:02.607 "reset": true, 00:14:02.607 "nvme_admin": false, 00:14:02.607 "nvme_io": false, 00:14:02.607 "nvme_io_md": false, 00:14:02.607 "write_zeroes": true, 00:14:02.607 "zcopy": false, 00:14:02.607 "get_zone_info": false, 00:14:02.607 "zone_management": false, 00:14:02.607 "zone_append": false, 00:14:02.607 "compare": false, 00:14:02.607 "compare_and_write": false, 00:14:02.607 "abort": false, 00:14:02.607 "seek_hole": false, 00:14:02.607 "seek_data": false, 00:14:02.607 "copy": false, 00:14:02.607 "nvme_iov_md": false 00:14:02.607 }, 00:14:02.607 "memory_domains": [ 00:14:02.607 { 00:14:02.607 "dma_device_id": "system", 00:14:02.607 "dma_device_type": 1 00:14:02.607 }, 00:14:02.607 { 
00:14:02.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.607 "dma_device_type": 2 00:14:02.607 }, 00:14:02.607 { 00:14:02.607 "dma_device_id": "system", 00:14:02.607 "dma_device_type": 1 00:14:02.607 }, 00:14:02.607 { 00:14:02.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.607 "dma_device_type": 2 00:14:02.607 }, 00:14:02.607 { 00:14:02.607 "dma_device_id": "system", 00:14:02.607 "dma_device_type": 1 00:14:02.607 }, 00:14:02.607 { 00:14:02.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.607 "dma_device_type": 2 00:14:02.607 } 00:14:02.607 ], 00:14:02.607 "driver_specific": { 00:14:02.607 "raid": { 00:14:02.607 "uuid": "ccf4361c-e890-42ec-bc47-8d32f3360e99", 00:14:02.607 "strip_size_kb": 0, 00:14:02.607 "state": "online", 00:14:02.607 "raid_level": "raid1", 00:14:02.607 "superblock": false, 00:14:02.607 "num_base_bdevs": 3, 00:14:02.607 "num_base_bdevs_discovered": 3, 00:14:02.607 "num_base_bdevs_operational": 3, 00:14:02.607 "base_bdevs_list": [ 00:14:02.607 { 00:14:02.607 "name": "BaseBdev1", 00:14:02.607 "uuid": "6940bf83-70bd-4e6c-8f5e-9919330e5cab", 00:14:02.607 "is_configured": true, 00:14:02.607 "data_offset": 0, 00:14:02.607 "data_size": 65536 00:14:02.607 }, 00:14:02.607 { 00:14:02.607 "name": "BaseBdev2", 00:14:02.607 "uuid": "ecf2ff40-8205-4275-8225-5864f271423a", 00:14:02.607 "is_configured": true, 00:14:02.607 "data_offset": 0, 00:14:02.607 "data_size": 65536 00:14:02.607 }, 00:14:02.607 { 00:14:02.607 "name": "BaseBdev3", 00:14:02.607 "uuid": "d1b38ad3-9ac4-425e-96ee-730413e20c8c", 00:14:02.607 "is_configured": true, 00:14:02.607 "data_offset": 0, 00:14:02.607 "data_size": 65536 00:14:02.607 } 00:14:02.607 ] 00:14:02.607 } 00:14:02.607 } 00:14:02.607 }' 00:14:02.607 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.607 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:14:02.607 BaseBdev2 00:14:02.607 BaseBdev3' 00:14:02.607 10:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.607 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.866 [2024-10-30 10:42:24.182332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.866 10:42:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.866 "name": "Existed_Raid", 00:14:02.866 "uuid": "ccf4361c-e890-42ec-bc47-8d32f3360e99", 00:14:02.866 "strip_size_kb": 0, 00:14:02.866 "state": "online", 00:14:02.866 "raid_level": "raid1", 00:14:02.866 "superblock": false, 00:14:02.866 "num_base_bdevs": 3, 00:14:02.866 "num_base_bdevs_discovered": 2, 00:14:02.866 "num_base_bdevs_operational": 2, 00:14:02.866 "base_bdevs_list": [ 00:14:02.866 { 00:14:02.866 "name": null, 00:14:02.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.866 "is_configured": false, 00:14:02.866 "data_offset": 0, 00:14:02.866 "data_size": 65536 00:14:02.866 }, 00:14:02.866 { 00:14:02.866 "name": "BaseBdev2", 00:14:02.866 "uuid": "ecf2ff40-8205-4275-8225-5864f271423a", 00:14:02.866 "is_configured": true, 00:14:02.866 "data_offset": 0, 00:14:02.866 "data_size": 65536 00:14:02.866 }, 00:14:02.866 { 00:14:02.866 "name": "BaseBdev3", 00:14:02.866 "uuid": "d1b38ad3-9ac4-425e-96ee-730413e20c8c", 00:14:02.866 "is_configured": true, 00:14:02.866 "data_offset": 0, 00:14:02.866 "data_size": 65536 00:14:02.866 } 00:14:02.866 ] 00:14:02.866 }' 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.866 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.434 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.434 [2024-10-30 10:42:24.846365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # 
rpc_cmd bdev_malloc_delete BaseBdev3 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.692 10:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.692 [2024-10-30 10:42:24.988242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:03.692 [2024-10-30 10:42:24.988400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.692 [2024-10-30 10:42:25.069188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.692 [2024-10-30 10:42:25.069242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.693 [2024-10-30 10:42:25.069268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 
00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.693 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.951 BaseBdev2 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.951 [ 00:14:03.951 { 00:14:03.951 "name": "BaseBdev2", 00:14:03.951 "aliases": [ 00:14:03.951 "6e277ca4-207a-4234-92b4-f63bfcbaa18d" 00:14:03.951 ], 00:14:03.951 "product_name": "Malloc disk", 00:14:03.951 "block_size": 512, 00:14:03.951 "num_blocks": 65536, 00:14:03.951 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:03.951 "assigned_rate_limits": { 00:14:03.951 "rw_ios_per_sec": 0, 00:14:03.951 "rw_mbytes_per_sec": 0, 00:14:03.951 "r_mbytes_per_sec": 0, 00:14:03.951 "w_mbytes_per_sec": 0 00:14:03.951 }, 00:14:03.951 "claimed": false, 00:14:03.951 "zoned": false, 00:14:03.951 "supported_io_types": { 00:14:03.951 "read": true, 00:14:03.951 "write": true, 00:14:03.951 "unmap": true, 00:14:03.951 "flush": true, 00:14:03.951 "reset": true, 00:14:03.951 "nvme_admin": false, 00:14:03.951 "nvme_io": false, 00:14:03.951 "nvme_io_md": false, 00:14:03.951 "write_zeroes": true, 00:14:03.951 "zcopy": true, 00:14:03.951 "get_zone_info": false, 00:14:03.951 "zone_management": false, 00:14:03.951 "zone_append": false, 00:14:03.951 "compare": false, 00:14:03.951 "compare_and_write": false, 00:14:03.951 "abort": true, 00:14:03.951 "seek_hole": false, 00:14:03.951 "seek_data": false, 00:14:03.951 "copy": true, 00:14:03.951 "nvme_iov_md": false 00:14:03.951 }, 00:14:03.951 "memory_domains": [ 00:14:03.951 { 00:14:03.951 "dma_device_id": "system", 00:14:03.951 "dma_device_type": 1 00:14:03.951 }, 00:14:03.951 { 00:14:03.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.951 "dma_device_type": 2 00:14:03.951 } 00:14:03.951 ], 00:14:03.951 "driver_specific": {} 00:14:03.951 } 00:14:03.951 ] 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.951 
10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.951 BaseBdev3 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.951 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.951 [ 00:14:03.951 { 00:14:03.951 "name": "BaseBdev3", 00:14:03.951 "aliases": [ 00:14:03.951 "9980f436-ab6c-4f14-a1fe-cbd5db8200bf" 00:14:03.951 ], 00:14:03.951 "product_name": "Malloc disk", 00:14:03.951 "block_size": 512, 00:14:03.951 "num_blocks": 65536, 00:14:03.951 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:03.951 "assigned_rate_limits": { 00:14:03.951 "rw_ios_per_sec": 0, 00:14:03.952 "rw_mbytes_per_sec": 0, 00:14:03.952 "r_mbytes_per_sec": 0, 00:14:03.952 "w_mbytes_per_sec": 0 00:14:03.952 }, 00:14:03.952 "claimed": false, 00:14:03.952 "zoned": false, 00:14:03.952 "supported_io_types": { 00:14:03.952 "read": true, 00:14:03.952 "write": true, 00:14:03.952 "unmap": true, 00:14:03.952 "flush": true, 00:14:03.952 "reset": true, 00:14:03.952 "nvme_admin": false, 00:14:03.952 "nvme_io": false, 00:14:03.952 "nvme_io_md": false, 00:14:03.952 "write_zeroes": true, 00:14:03.952 "zcopy": true, 00:14:03.952 "get_zone_info": false, 00:14:03.952 "zone_management": false, 00:14:03.952 "zone_append": false, 00:14:03.952 "compare": false, 00:14:03.952 "compare_and_write": false, 00:14:03.952 "abort": true, 00:14:03.952 "seek_hole": false, 00:14:03.952 "seek_data": false, 00:14:03.952 "copy": true, 00:14:03.952 "nvme_iov_md": false 00:14:03.952 }, 00:14:03.952 "memory_domains": [ 00:14:03.952 { 00:14:03.952 "dma_device_id": "system", 00:14:03.952 "dma_device_type": 1 00:14:03.952 }, 00:14:03.952 { 00:14:03.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.952 "dma_device_type": 2 00:14:03.952 } 00:14:03.952 ], 00:14:03.952 "driver_specific": {} 00:14:03.952 } 00:14:03.952 ] 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.952 10:42:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.952 [2024-10-30 10:42:25.267730] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.952 [2024-10-30 10:42:25.267952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.952 [2024-10-30 10:42:25.268010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.952 [2024-10-30 10:42:25.270637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.952 "name": "Existed_Raid", 00:14:03.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.952 "strip_size_kb": 0, 00:14:03.952 "state": "configuring", 00:14:03.952 "raid_level": "raid1", 00:14:03.952 "superblock": false, 00:14:03.952 "num_base_bdevs": 3, 00:14:03.952 "num_base_bdevs_discovered": 2, 00:14:03.952 "num_base_bdevs_operational": 3, 00:14:03.952 "base_bdevs_list": [ 00:14:03.952 { 00:14:03.952 "name": "BaseBdev1", 00:14:03.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.952 "is_configured": false, 00:14:03.952 "data_offset": 0, 00:14:03.952 "data_size": 0 00:14:03.952 }, 00:14:03.952 { 00:14:03.952 "name": "BaseBdev2", 00:14:03.952 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:03.952 "is_configured": true, 00:14:03.952 "data_offset": 0, 00:14:03.952 "data_size": 65536 00:14:03.952 }, 00:14:03.952 { 
00:14:03.952 "name": "BaseBdev3", 00:14:03.952 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:03.952 "is_configured": true, 00:14:03.952 "data_offset": 0, 00:14:03.952 "data_size": 65536 00:14:03.952 } 00:14:03.952 ] 00:14:03.952 }' 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.952 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.518 [2024-10-30 10:42:25.811942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.518 "name": "Existed_Raid", 00:14:04.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.518 "strip_size_kb": 0, 00:14:04.518 "state": "configuring", 00:14:04.518 "raid_level": "raid1", 00:14:04.518 "superblock": false, 00:14:04.518 "num_base_bdevs": 3, 00:14:04.518 "num_base_bdevs_discovered": 1, 00:14:04.518 "num_base_bdevs_operational": 3, 00:14:04.518 "base_bdevs_list": [ 00:14:04.518 { 00:14:04.518 "name": "BaseBdev1", 00:14:04.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.518 "is_configured": false, 00:14:04.518 "data_offset": 0, 00:14:04.518 "data_size": 0 00:14:04.518 }, 00:14:04.518 { 00:14:04.518 "name": null, 00:14:04.518 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:04.518 "is_configured": false, 00:14:04.518 "data_offset": 0, 00:14:04.518 "data_size": 65536 00:14:04.518 }, 00:14:04.518 { 00:14:04.518 "name": "BaseBdev3", 00:14:04.518 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:04.518 "is_configured": true, 00:14:04.518 "data_offset": 0, 00:14:04.518 "data_size": 65536 00:14:04.518 } 00:14:04.518 ] 00:14:04.518 }' 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.518 10:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.086 [2024-10-30 10:42:26.417288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.086 BaseBdev1 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:05.086 
10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:05.086 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.087 [ 00:14:05.087 { 00:14:05.087 "name": "BaseBdev1", 00:14:05.087 "aliases": [ 00:14:05.087 "762911b2-77f5-465b-883b-57ac863b25b1" 00:14:05.087 ], 00:14:05.087 "product_name": "Malloc disk", 00:14:05.087 "block_size": 512, 00:14:05.087 "num_blocks": 65536, 00:14:05.087 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 00:14:05.087 "assigned_rate_limits": { 00:14:05.087 "rw_ios_per_sec": 0, 00:14:05.087 "rw_mbytes_per_sec": 0, 00:14:05.087 "r_mbytes_per_sec": 0, 00:14:05.087 "w_mbytes_per_sec": 0 00:14:05.087 }, 00:14:05.087 "claimed": true, 00:14:05.087 "claim_type": "exclusive_write", 00:14:05.087 "zoned": false, 00:14:05.087 "supported_io_types": { 00:14:05.087 "read": true, 00:14:05.087 "write": true, 00:14:05.087 "unmap": true, 00:14:05.087 "flush": true, 00:14:05.087 "reset": true, 00:14:05.087 "nvme_admin": false, 00:14:05.087 "nvme_io": false, 00:14:05.087 "nvme_io_md": false, 00:14:05.087 "write_zeroes": true, 00:14:05.087 "zcopy": true, 00:14:05.087 "get_zone_info": false, 00:14:05.087 "zone_management": false, 00:14:05.087 "zone_append": false, 00:14:05.087 "compare": 
false, 00:14:05.087 "compare_and_write": false, 00:14:05.087 "abort": true, 00:14:05.087 "seek_hole": false, 00:14:05.087 "seek_data": false, 00:14:05.087 "copy": true, 00:14:05.087 "nvme_iov_md": false 00:14:05.087 }, 00:14:05.087 "memory_domains": [ 00:14:05.087 { 00:14:05.087 "dma_device_id": "system", 00:14:05.087 "dma_device_type": 1 00:14:05.087 }, 00:14:05.087 { 00:14:05.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.087 "dma_device_type": 2 00:14:05.087 } 00:14:05.087 ], 00:14:05.087 "driver_specific": {} 00:14:05.087 } 00:14:05.087 ] 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.087 "name": "Existed_Raid", 00:14:05.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.087 "strip_size_kb": 0, 00:14:05.087 "state": "configuring", 00:14:05.087 "raid_level": "raid1", 00:14:05.087 "superblock": false, 00:14:05.087 "num_base_bdevs": 3, 00:14:05.087 "num_base_bdevs_discovered": 2, 00:14:05.087 "num_base_bdevs_operational": 3, 00:14:05.087 "base_bdevs_list": [ 00:14:05.087 { 00:14:05.087 "name": "BaseBdev1", 00:14:05.087 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 00:14:05.087 "is_configured": true, 00:14:05.087 "data_offset": 0, 00:14:05.087 "data_size": 65536 00:14:05.087 }, 00:14:05.087 { 00:14:05.087 "name": null, 00:14:05.087 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:05.087 "is_configured": false, 00:14:05.087 "data_offset": 0, 00:14:05.087 "data_size": 65536 00:14:05.087 }, 00:14:05.087 { 00:14:05.087 "name": "BaseBdev3", 00:14:05.087 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:05.087 "is_configured": true, 00:14:05.087 "data_offset": 0, 00:14:05.087 "data_size": 65536 00:14:05.087 } 00:14:05.087 ] 00:14:05.087 }' 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.087 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.655 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:14:05.655 10:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:05.655 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.655 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.655 10:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.655 [2024-10-30 10:42:27.029494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.655 
10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.655 "name": "Existed_Raid", 00:14:05.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.655 "strip_size_kb": 0, 00:14:05.655 "state": "configuring", 00:14:05.655 "raid_level": "raid1", 00:14:05.655 "superblock": false, 00:14:05.655 "num_base_bdevs": 3, 00:14:05.655 "num_base_bdevs_discovered": 1, 00:14:05.655 "num_base_bdevs_operational": 3, 00:14:05.655 "base_bdevs_list": [ 00:14:05.655 { 00:14:05.655 "name": "BaseBdev1", 00:14:05.655 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 00:14:05.655 "is_configured": true, 00:14:05.655 "data_offset": 0, 00:14:05.655 "data_size": 65536 00:14:05.655 }, 00:14:05.655 { 00:14:05.655 "name": null, 00:14:05.655 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:05.655 "is_configured": false, 00:14:05.655 "data_offset": 0, 00:14:05.655 "data_size": 65536 00:14:05.655 }, 00:14:05.655 { 00:14:05.655 "name": null, 00:14:05.655 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:05.655 "is_configured": false, 00:14:05.655 "data_offset": 0, 
00:14:05.655 "data_size": 65536 00:14:05.655 } 00:14:05.655 ] 00:14:05.655 }' 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.655 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.226 [2024-10-30 10:42:27.597787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.226 "name": "Existed_Raid", 00:14:06.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.226 "strip_size_kb": 0, 00:14:06.226 "state": "configuring", 00:14:06.226 "raid_level": "raid1", 00:14:06.226 "superblock": false, 00:14:06.226 "num_base_bdevs": 3, 00:14:06.226 "num_base_bdevs_discovered": 2, 00:14:06.226 "num_base_bdevs_operational": 3, 00:14:06.226 "base_bdevs_list": [ 00:14:06.226 { 00:14:06.226 "name": "BaseBdev1", 00:14:06.226 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 00:14:06.226 "is_configured": true, 00:14:06.226 "data_offset": 0, 00:14:06.226 "data_size": 65536 
00:14:06.226 }, 00:14:06.226 { 00:14:06.226 "name": null, 00:14:06.226 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:06.226 "is_configured": false, 00:14:06.226 "data_offset": 0, 00:14:06.226 "data_size": 65536 00:14:06.226 }, 00:14:06.226 { 00:14:06.226 "name": "BaseBdev3", 00:14:06.226 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:06.226 "is_configured": true, 00:14:06.226 "data_offset": 0, 00:14:06.226 "data_size": 65536 00:14:06.226 } 00:14:06.226 ] 00:14:06.226 }' 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.226 10:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.792 [2024-10-30 10:42:28.169891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.792 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.051 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.051 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.051 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.051 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.051 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.051 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.051 "name": "Existed_Raid", 00:14:07.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.051 "strip_size_kb": 0, 00:14:07.051 "state": "configuring", 00:14:07.051 "raid_level": "raid1", 00:14:07.051 
"superblock": false, 00:14:07.051 "num_base_bdevs": 3, 00:14:07.051 "num_base_bdevs_discovered": 1, 00:14:07.051 "num_base_bdevs_operational": 3, 00:14:07.051 "base_bdevs_list": [ 00:14:07.051 { 00:14:07.051 "name": null, 00:14:07.051 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 00:14:07.051 "is_configured": false, 00:14:07.051 "data_offset": 0, 00:14:07.051 "data_size": 65536 00:14:07.051 }, 00:14:07.051 { 00:14:07.051 "name": null, 00:14:07.051 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:07.051 "is_configured": false, 00:14:07.051 "data_offset": 0, 00:14:07.051 "data_size": 65536 00:14:07.051 }, 00:14:07.051 { 00:14:07.051 "name": "BaseBdev3", 00:14:07.051 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:07.051 "is_configured": true, 00:14:07.051 "data_offset": 0, 00:14:07.051 "data_size": 65536 00:14:07.051 } 00:14:07.051 ] 00:14:07.051 }' 00:14:07.051 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.051 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.310 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.569 [2024-10-30 10:42:28.837210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.569 10:42:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.569 "name": "Existed_Raid", 00:14:07.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.569 "strip_size_kb": 0, 00:14:07.569 "state": "configuring", 00:14:07.569 "raid_level": "raid1", 00:14:07.569 "superblock": false, 00:14:07.569 "num_base_bdevs": 3, 00:14:07.569 "num_base_bdevs_discovered": 2, 00:14:07.569 "num_base_bdevs_operational": 3, 00:14:07.569 "base_bdevs_list": [ 00:14:07.569 { 00:14:07.569 "name": null, 00:14:07.569 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 00:14:07.569 "is_configured": false, 00:14:07.569 "data_offset": 0, 00:14:07.569 "data_size": 65536 00:14:07.569 }, 00:14:07.569 { 00:14:07.569 "name": "BaseBdev2", 00:14:07.569 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:07.569 "is_configured": true, 00:14:07.569 "data_offset": 0, 00:14:07.569 "data_size": 65536 00:14:07.569 }, 00:14:07.569 { 00:14:07.569 "name": "BaseBdev3", 00:14:07.569 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:07.569 "is_configured": true, 00:14:07.569 "data_offset": 0, 00:14:07.569 "data_size": 65536 00:14:07.569 } 00:14:07.569 ] 00:14:07.569 }' 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.569 10:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.139 10:42:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 762911b2-77f5-465b-883b-57ac863b25b1 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 [2024-10-30 10:42:29.530673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:08.139 [2024-10-30 10:42:29.530737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:08.139 [2024-10-30 10:42:29.530749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:08.139 [2024-10-30 10:42:29.531120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:08.139 NewBaseBdev 00:14:08.139 [2024-10-30 10:42:29.531325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:08.139 [2024-10-30 10:42:29.531353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:08.139 [2024-10-30 
10:42:29.531641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 [ 00:14:08.139 { 00:14:08.139 "name": "NewBaseBdev", 00:14:08.139 "aliases": [ 00:14:08.139 "762911b2-77f5-465b-883b-57ac863b25b1" 00:14:08.139 ], 00:14:08.139 "product_name": "Malloc disk", 00:14:08.139 "block_size": 512, 00:14:08.139 "num_blocks": 65536, 00:14:08.139 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 
00:14:08.139 "assigned_rate_limits": { 00:14:08.139 "rw_ios_per_sec": 0, 00:14:08.139 "rw_mbytes_per_sec": 0, 00:14:08.139 "r_mbytes_per_sec": 0, 00:14:08.139 "w_mbytes_per_sec": 0 00:14:08.139 }, 00:14:08.139 "claimed": true, 00:14:08.139 "claim_type": "exclusive_write", 00:14:08.139 "zoned": false, 00:14:08.139 "supported_io_types": { 00:14:08.139 "read": true, 00:14:08.139 "write": true, 00:14:08.139 "unmap": true, 00:14:08.139 "flush": true, 00:14:08.139 "reset": true, 00:14:08.139 "nvme_admin": false, 00:14:08.139 "nvme_io": false, 00:14:08.139 "nvme_io_md": false, 00:14:08.139 "write_zeroes": true, 00:14:08.139 "zcopy": true, 00:14:08.139 "get_zone_info": false, 00:14:08.139 "zone_management": false, 00:14:08.139 "zone_append": false, 00:14:08.139 "compare": false, 00:14:08.139 "compare_and_write": false, 00:14:08.139 "abort": true, 00:14:08.139 "seek_hole": false, 00:14:08.139 "seek_data": false, 00:14:08.139 "copy": true, 00:14:08.139 "nvme_iov_md": false 00:14:08.139 }, 00:14:08.139 "memory_domains": [ 00:14:08.139 { 00:14:08.139 "dma_device_id": "system", 00:14:08.139 "dma_device_type": 1 00:14:08.139 }, 00:14:08.139 { 00:14:08.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.139 "dma_device_type": 2 00:14:08.139 } 00:14:08.139 ], 00:14:08.139 "driver_specific": {} 00:14:08.139 } 00:14:08.139 ] 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.409 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.409 "name": "Existed_Raid", 00:14:08.409 "uuid": "dc1e5ac8-b554-4b32-b23b-73f2773563cb", 00:14:08.409 "strip_size_kb": 0, 00:14:08.409 "state": "online", 00:14:08.409 "raid_level": "raid1", 00:14:08.409 "superblock": false, 00:14:08.409 "num_base_bdevs": 3, 00:14:08.409 "num_base_bdevs_discovered": 3, 00:14:08.409 "num_base_bdevs_operational": 3, 00:14:08.409 "base_bdevs_list": [ 00:14:08.409 { 00:14:08.409 "name": "NewBaseBdev", 00:14:08.409 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 00:14:08.409 "is_configured": true, 00:14:08.409 "data_offset": 0, 00:14:08.409 "data_size": 65536 
00:14:08.409 }, 00:14:08.409 { 00:14:08.409 "name": "BaseBdev2", 00:14:08.409 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:08.409 "is_configured": true, 00:14:08.409 "data_offset": 0, 00:14:08.409 "data_size": 65536 00:14:08.409 }, 00:14:08.409 { 00:14:08.409 "name": "BaseBdev3", 00:14:08.409 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:08.409 "is_configured": true, 00:14:08.409 "data_offset": 0, 00:14:08.409 "data_size": 65536 00:14:08.409 } 00:14:08.409 ] 00:14:08.409 }' 00:14:08.409 10:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.409 10:42:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.668 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.668 [2024-10-30 10:42:30.115300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:08.927 "name": "Existed_Raid", 00:14:08.927 "aliases": [ 00:14:08.927 "dc1e5ac8-b554-4b32-b23b-73f2773563cb" 00:14:08.927 ], 00:14:08.927 "product_name": "Raid Volume", 00:14:08.927 "block_size": 512, 00:14:08.927 "num_blocks": 65536, 00:14:08.927 "uuid": "dc1e5ac8-b554-4b32-b23b-73f2773563cb", 00:14:08.927 "assigned_rate_limits": { 00:14:08.927 "rw_ios_per_sec": 0, 00:14:08.927 "rw_mbytes_per_sec": 0, 00:14:08.927 "r_mbytes_per_sec": 0, 00:14:08.927 "w_mbytes_per_sec": 0 00:14:08.927 }, 00:14:08.927 "claimed": false, 00:14:08.927 "zoned": false, 00:14:08.927 "supported_io_types": { 00:14:08.927 "read": true, 00:14:08.927 "write": true, 00:14:08.927 "unmap": false, 00:14:08.927 "flush": false, 00:14:08.927 "reset": true, 00:14:08.927 "nvme_admin": false, 00:14:08.927 "nvme_io": false, 00:14:08.927 "nvme_io_md": false, 00:14:08.927 "write_zeroes": true, 00:14:08.927 "zcopy": false, 00:14:08.927 "get_zone_info": false, 00:14:08.927 "zone_management": false, 00:14:08.927 "zone_append": false, 00:14:08.927 "compare": false, 00:14:08.927 "compare_and_write": false, 00:14:08.927 "abort": false, 00:14:08.927 "seek_hole": false, 00:14:08.927 "seek_data": false, 00:14:08.927 "copy": false, 00:14:08.927 "nvme_iov_md": false 00:14:08.927 }, 00:14:08.927 "memory_domains": [ 00:14:08.927 { 00:14:08.927 "dma_device_id": "system", 00:14:08.927 "dma_device_type": 1 00:14:08.927 }, 00:14:08.927 { 00:14:08.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.927 "dma_device_type": 2 00:14:08.927 }, 00:14:08.927 { 00:14:08.927 "dma_device_id": "system", 00:14:08.927 "dma_device_type": 1 00:14:08.927 }, 00:14:08.927 { 00:14:08.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.927 "dma_device_type": 2 00:14:08.927 }, 00:14:08.927 { 00:14:08.927 "dma_device_id": "system", 00:14:08.927 "dma_device_type": 1 00:14:08.927 }, 
00:14:08.927 { 00:14:08.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.927 "dma_device_type": 2 00:14:08.927 } 00:14:08.927 ], 00:14:08.927 "driver_specific": { 00:14:08.927 "raid": { 00:14:08.927 "uuid": "dc1e5ac8-b554-4b32-b23b-73f2773563cb", 00:14:08.927 "strip_size_kb": 0, 00:14:08.927 "state": "online", 00:14:08.927 "raid_level": "raid1", 00:14:08.927 "superblock": false, 00:14:08.927 "num_base_bdevs": 3, 00:14:08.927 "num_base_bdevs_discovered": 3, 00:14:08.927 "num_base_bdevs_operational": 3, 00:14:08.927 "base_bdevs_list": [ 00:14:08.927 { 00:14:08.927 "name": "NewBaseBdev", 00:14:08.927 "uuid": "762911b2-77f5-465b-883b-57ac863b25b1", 00:14:08.927 "is_configured": true, 00:14:08.927 "data_offset": 0, 00:14:08.927 "data_size": 65536 00:14:08.927 }, 00:14:08.927 { 00:14:08.927 "name": "BaseBdev2", 00:14:08.927 "uuid": "6e277ca4-207a-4234-92b4-f63bfcbaa18d", 00:14:08.927 "is_configured": true, 00:14:08.927 "data_offset": 0, 00:14:08.927 "data_size": 65536 00:14:08.927 }, 00:14:08.927 { 00:14:08.927 "name": "BaseBdev3", 00:14:08.927 "uuid": "9980f436-ab6c-4f14-a1fe-cbd5db8200bf", 00:14:08.927 "is_configured": true, 00:14:08.927 "data_offset": 0, 00:14:08.927 "data_size": 65536 00:14:08.927 } 00:14:08.927 ] 00:14:08.927 } 00:14:08.927 } 00:14:08.927 }' 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:08.927 BaseBdev2 00:14:08.927 BaseBdev3' 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.927 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.186 [2024-10-30 10:42:30.466998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.186 [2024-10-30 10:42:30.467167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.186 [2024-10-30 10:42:30.467354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.186 [2024-10-30 10:42:30.467815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.186 [2024-10-30 10:42:30.467959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67615 00:14:09.186 10:42:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67615 ']' 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67615 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67615 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67615' 00:14:09.186 killing process with pid 67615 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67615 00:14:09.186 [2024-10-30 10:42:30.504129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.186 10:42:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67615 00:14:09.442 [2024-10-30 10:42:30.774150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.377 ************************************ 00:14:10.377 END TEST raid_state_function_test 00:14:10.377 ************************************ 00:14:10.377 10:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:10.377 00:14:10.377 real 0m11.944s 00:14:10.377 user 0m19.906s 00:14:10.377 sys 0m1.609s 00:14:10.377 10:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.377 10:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.637 10:42:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:10.637 10:42:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:10.637 10:42:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.637 10:42:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.637 ************************************ 00:14:10.637 START TEST raid_state_function_test_sb 00:14:10.637 ************************************ 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68255 00:14:10.637 Process raid pid: 68255 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68255' 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68255 00:14:10.637 10:42:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68255 ']' 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.637 10:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:10.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.638 10:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.638 10:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:10.638 10:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.638 [2024-10-30 10:42:31.980897] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:14:10.638 [2024-10-30 10:42:31.981112] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.934 [2024-10-30 10:42:32.165834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.934 [2024-10-30 10:42:32.294851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.218 [2024-10-30 10:42:32.503404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.218 [2024-10-30 10:42:32.503452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.475 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.475 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:11.475 10:42:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:11.475 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.475 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.475 [2024-10-30 10:42:32.936661] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:11.475 [2024-10-30 10:42:32.936754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:11.475 [2024-10-30 10:42:32.936791] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.476 [2024-10-30 10:42:32.936808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.476 [2024-10-30 10:42:32.936819] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:11.476 [2024-10-30 10:42:32.936833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.476 
10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.476 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.737 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.737 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.737 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.737 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.737 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.737 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.737 "name": "Existed_Raid", 00:14:11.737 "uuid": "dcefeb41-bda3-4a8d-843b-138184a354cf", 00:14:11.737 "strip_size_kb": 0, 00:14:11.737 "state": "configuring", 00:14:11.737 "raid_level": "raid1", 00:14:11.737 "superblock": true, 00:14:11.737 "num_base_bdevs": 3, 00:14:11.737 "num_base_bdevs_discovered": 0, 00:14:11.737 "num_base_bdevs_operational": 3, 00:14:11.737 "base_bdevs_list": [ 00:14:11.737 { 00:14:11.737 "name": "BaseBdev1", 00:14:11.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.737 "is_configured": false, 00:14:11.737 "data_offset": 0, 00:14:11.737 "data_size": 0 00:14:11.737 }, 00:14:11.737 { 00:14:11.737 "name": "BaseBdev2", 00:14:11.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.737 "is_configured": false, 00:14:11.737 "data_offset": 0, 00:14:11.737 "data_size": 0 00:14:11.737 }, 00:14:11.737 { 00:14:11.737 
"name": "BaseBdev3", 00:14:11.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.737 "is_configured": false, 00:14:11.737 "data_offset": 0, 00:14:11.737 "data_size": 0 00:14:11.737 } 00:14:11.737 ] 00:14:11.737 }' 00:14:11.737 10:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.737 10:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.304 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.304 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.304 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.304 [2024-10-30 10:42:33.472740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.304 [2024-10-30 10:42:33.472806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:12.304 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.304 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.305 [2024-10-30 10:42:33.480736] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.305 [2024-10-30 10:42:33.480835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.305 [2024-10-30 10:42:33.480851] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.305 [2024-10-30 
10:42:33.480867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.305 [2024-10-30 10:42:33.480877] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:12.305 [2024-10-30 10:42:33.480892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.305 [2024-10-30 10:42:33.525453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.305 BaseBdev1 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.305 [ 00:14:12.305 { 00:14:12.305 "name": "BaseBdev1", 00:14:12.305 "aliases": [ 00:14:12.305 "9eebac9c-22f7-4554-8871-475b7d4d0b26" 00:14:12.305 ], 00:14:12.305 "product_name": "Malloc disk", 00:14:12.305 "block_size": 512, 00:14:12.305 "num_blocks": 65536, 00:14:12.305 "uuid": "9eebac9c-22f7-4554-8871-475b7d4d0b26", 00:14:12.305 "assigned_rate_limits": { 00:14:12.305 "rw_ios_per_sec": 0, 00:14:12.305 "rw_mbytes_per_sec": 0, 00:14:12.305 "r_mbytes_per_sec": 0, 00:14:12.305 "w_mbytes_per_sec": 0 00:14:12.305 }, 00:14:12.305 "claimed": true, 00:14:12.305 "claim_type": "exclusive_write", 00:14:12.305 "zoned": false, 00:14:12.305 "supported_io_types": { 00:14:12.305 "read": true, 00:14:12.305 "write": true, 00:14:12.305 "unmap": true, 00:14:12.305 "flush": true, 00:14:12.305 "reset": true, 00:14:12.305 "nvme_admin": false, 00:14:12.305 "nvme_io": false, 00:14:12.305 "nvme_io_md": false, 00:14:12.305 "write_zeroes": true, 00:14:12.305 "zcopy": true, 00:14:12.305 "get_zone_info": false, 00:14:12.305 "zone_management": false, 00:14:12.305 "zone_append": false, 00:14:12.305 "compare": false, 00:14:12.305 "compare_and_write": false, 00:14:12.305 "abort": true, 00:14:12.305 "seek_hole": false, 00:14:12.305 "seek_data": false, 00:14:12.305 "copy": true, 00:14:12.305 "nvme_iov_md": false 00:14:12.305 }, 00:14:12.305 "memory_domains": [ 00:14:12.305 { 00:14:12.305 "dma_device_id": 
"system", 00:14:12.305 "dma_device_type": 1 00:14:12.305 }, 00:14:12.305 { 00:14:12.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.305 "dma_device_type": 2 00:14:12.305 } 00:14:12.305 ], 00:14:12.305 "driver_specific": {} 00:14:12.305 } 00:14:12.305 ] 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.305 "name": "Existed_Raid", 00:14:12.305 "uuid": "d2a7fda4-0c12-4b47-a253-afb4fcda5c19", 00:14:12.305 "strip_size_kb": 0, 00:14:12.305 "state": "configuring", 00:14:12.305 "raid_level": "raid1", 00:14:12.305 "superblock": true, 00:14:12.305 "num_base_bdevs": 3, 00:14:12.305 "num_base_bdevs_discovered": 1, 00:14:12.305 "num_base_bdevs_operational": 3, 00:14:12.305 "base_bdevs_list": [ 00:14:12.305 { 00:14:12.305 "name": "BaseBdev1", 00:14:12.305 "uuid": "9eebac9c-22f7-4554-8871-475b7d4d0b26", 00:14:12.305 "is_configured": true, 00:14:12.305 "data_offset": 2048, 00:14:12.305 "data_size": 63488 00:14:12.305 }, 00:14:12.305 { 00:14:12.305 "name": "BaseBdev2", 00:14:12.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.305 "is_configured": false, 00:14:12.305 "data_offset": 0, 00:14:12.305 "data_size": 0 00:14:12.305 }, 00:14:12.305 { 00:14:12.305 "name": "BaseBdev3", 00:14:12.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.305 "is_configured": false, 00:14:12.305 "data_offset": 0, 00:14:12.305 "data_size": 0 00:14:12.305 } 00:14:12.305 ] 00:14:12.305 }' 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.305 10:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.872 [2024-10-30 10:42:34.057655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.872 [2024-10-30 10:42:34.057736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.872 [2024-10-30 10:42:34.065697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.872 [2024-10-30 10:42:34.068259] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.872 [2024-10-30 10:42:34.068319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.872 [2024-10-30 10:42:34.068337] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:12.872 [2024-10-30 10:42:34.068353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:12.872 10:42:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.872 "name": "Existed_Raid", 00:14:12.872 "uuid": "3b42461c-810e-473f-a64a-1a7fdee69ecf", 00:14:12.872 "strip_size_kb": 0, 00:14:12.872 "state": "configuring", 00:14:12.872 "raid_level": "raid1", 00:14:12.872 "superblock": true, 00:14:12.872 "num_base_bdevs": 3, 00:14:12.872 
"num_base_bdevs_discovered": 1, 00:14:12.872 "num_base_bdevs_operational": 3, 00:14:12.872 "base_bdevs_list": [ 00:14:12.872 { 00:14:12.872 "name": "BaseBdev1", 00:14:12.872 "uuid": "9eebac9c-22f7-4554-8871-475b7d4d0b26", 00:14:12.872 "is_configured": true, 00:14:12.872 "data_offset": 2048, 00:14:12.872 "data_size": 63488 00:14:12.872 }, 00:14:12.872 { 00:14:12.872 "name": "BaseBdev2", 00:14:12.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.872 "is_configured": false, 00:14:12.872 "data_offset": 0, 00:14:12.872 "data_size": 0 00:14:12.872 }, 00:14:12.872 { 00:14:12.872 "name": "BaseBdev3", 00:14:12.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.872 "is_configured": false, 00:14:12.872 "data_offset": 0, 00:14:12.872 "data_size": 0 00:14:12.872 } 00:14:12.872 ] 00:14:12.872 }' 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.872 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.130 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:13.130 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.130 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.387 [2024-10-30 10:42:34.624074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.387 BaseBdev2 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.388 [ 00:14:13.388 { 00:14:13.388 "name": "BaseBdev2", 00:14:13.388 "aliases": [ 00:14:13.388 "7171ce32-b992-4284-bf57-efa1dde4592e" 00:14:13.388 ], 00:14:13.388 "product_name": "Malloc disk", 00:14:13.388 "block_size": 512, 00:14:13.388 "num_blocks": 65536, 00:14:13.388 "uuid": "7171ce32-b992-4284-bf57-efa1dde4592e", 00:14:13.388 "assigned_rate_limits": { 00:14:13.388 "rw_ios_per_sec": 0, 00:14:13.388 "rw_mbytes_per_sec": 0, 00:14:13.388 "r_mbytes_per_sec": 0, 00:14:13.388 "w_mbytes_per_sec": 0 00:14:13.388 }, 00:14:13.388 "claimed": true, 00:14:13.388 "claim_type": "exclusive_write", 00:14:13.388 "zoned": false, 00:14:13.388 "supported_io_types": { 00:14:13.388 "read": true, 00:14:13.388 "write": true, 00:14:13.388 "unmap": true, 00:14:13.388 "flush": true, 00:14:13.388 "reset": true, 00:14:13.388 "nvme_admin": false, 
00:14:13.388 "nvme_io": false, 00:14:13.388 "nvme_io_md": false, 00:14:13.388 "write_zeroes": true, 00:14:13.388 "zcopy": true, 00:14:13.388 "get_zone_info": false, 00:14:13.388 "zone_management": false, 00:14:13.388 "zone_append": false, 00:14:13.388 "compare": false, 00:14:13.388 "compare_and_write": false, 00:14:13.388 "abort": true, 00:14:13.388 "seek_hole": false, 00:14:13.388 "seek_data": false, 00:14:13.388 "copy": true, 00:14:13.388 "nvme_iov_md": false 00:14:13.388 }, 00:14:13.388 "memory_domains": [ 00:14:13.388 { 00:14:13.388 "dma_device_id": "system", 00:14:13.388 "dma_device_type": 1 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.388 "dma_device_type": 2 00:14:13.388 } 00:14:13.388 ], 00:14:13.388 "driver_specific": {} 00:14:13.388 } 00:14:13.388 ] 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.388 "name": "Existed_Raid", 00:14:13.388 "uuid": "3b42461c-810e-473f-a64a-1a7fdee69ecf", 00:14:13.388 "strip_size_kb": 0, 00:14:13.388 "state": "configuring", 00:14:13.388 "raid_level": "raid1", 00:14:13.388 "superblock": true, 00:14:13.388 "num_base_bdevs": 3, 00:14:13.388 "num_base_bdevs_discovered": 2, 00:14:13.388 "num_base_bdevs_operational": 3, 00:14:13.388 "base_bdevs_list": [ 00:14:13.388 { 00:14:13.388 "name": "BaseBdev1", 00:14:13.388 "uuid": "9eebac9c-22f7-4554-8871-475b7d4d0b26", 00:14:13.388 "is_configured": true, 00:14:13.388 "data_offset": 2048, 00:14:13.388 "data_size": 63488 00:14:13.388 }, 00:14:13.388 { 00:14:13.388 "name": "BaseBdev2", 00:14:13.388 "uuid": "7171ce32-b992-4284-bf57-efa1dde4592e", 00:14:13.388 "is_configured": true, 00:14:13.388 "data_offset": 2048, 00:14:13.388 "data_size": 63488 00:14:13.388 }, 
00:14:13.388 { 00:14:13.388 "name": "BaseBdev3", 00:14:13.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.388 "is_configured": false, 00:14:13.388 "data_offset": 0, 00:14:13.388 "data_size": 0 00:14:13.388 } 00:14:13.388 ] 00:14:13.388 }' 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.388 10:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.953 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.953 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.953 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.953 [2024-10-30 10:42:35.240783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.953 [2024-10-30 10:42:35.241135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:13.953 [2024-10-30 10:42:35.241165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:13.953 BaseBdev3 00:14:13.953 [2024-10-30 10:42:35.241503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:13.953 [2024-10-30 10:42:35.241708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:13.953 [2024-10-30 10:42:35.241730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:13.953 [2024-10-30 10:42:35.241917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:13.954 10:42:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.954 [ 00:14:13.954 { 00:14:13.954 "name": "BaseBdev3", 00:14:13.954 "aliases": [ 00:14:13.954 "390befcd-ab7c-4aee-9b70-68a1d995996c" 00:14:13.954 ], 00:14:13.954 "product_name": "Malloc disk", 00:14:13.954 "block_size": 512, 00:14:13.954 "num_blocks": 65536, 00:14:13.954 "uuid": "390befcd-ab7c-4aee-9b70-68a1d995996c", 00:14:13.954 "assigned_rate_limits": { 00:14:13.954 "rw_ios_per_sec": 0, 00:14:13.954 "rw_mbytes_per_sec": 0, 00:14:13.954 "r_mbytes_per_sec": 0, 00:14:13.954 "w_mbytes_per_sec": 0 00:14:13.954 }, 00:14:13.954 "claimed": true, 00:14:13.954 "claim_type": "exclusive_write", 00:14:13.954 "zoned": false, 
00:14:13.954 "supported_io_types": { 00:14:13.954 "read": true, 00:14:13.954 "write": true, 00:14:13.954 "unmap": true, 00:14:13.954 "flush": true, 00:14:13.954 "reset": true, 00:14:13.954 "nvme_admin": false, 00:14:13.954 "nvme_io": false, 00:14:13.954 "nvme_io_md": false, 00:14:13.954 "write_zeroes": true, 00:14:13.954 "zcopy": true, 00:14:13.954 "get_zone_info": false, 00:14:13.954 "zone_management": false, 00:14:13.954 "zone_append": false, 00:14:13.954 "compare": false, 00:14:13.954 "compare_and_write": false, 00:14:13.954 "abort": true, 00:14:13.954 "seek_hole": false, 00:14:13.954 "seek_data": false, 00:14:13.954 "copy": true, 00:14:13.954 "nvme_iov_md": false 00:14:13.954 }, 00:14:13.954 "memory_domains": [ 00:14:13.954 { 00:14:13.954 "dma_device_id": "system", 00:14:13.954 "dma_device_type": 1 00:14:13.954 }, 00:14:13.954 { 00:14:13.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.954 "dma_device_type": 2 00:14:13.954 } 00:14:13.954 ], 00:14:13.954 "driver_specific": {} 00:14:13.954 } 00:14:13.954 ] 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.954 10:42:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.954 "name": "Existed_Raid", 00:14:13.954 "uuid": "3b42461c-810e-473f-a64a-1a7fdee69ecf", 00:14:13.954 "strip_size_kb": 0, 00:14:13.954 "state": "online", 00:14:13.954 "raid_level": "raid1", 00:14:13.954 "superblock": true, 00:14:13.954 "num_base_bdevs": 3, 00:14:13.954 "num_base_bdevs_discovered": 3, 00:14:13.954 "num_base_bdevs_operational": 3, 00:14:13.954 "base_bdevs_list": [ 00:14:13.954 { 00:14:13.954 "name": "BaseBdev1", 00:14:13.954 "uuid": "9eebac9c-22f7-4554-8871-475b7d4d0b26", 00:14:13.954 "is_configured": true, 00:14:13.954 "data_offset": 2048, 00:14:13.954 "data_size": 63488 00:14:13.954 }, 00:14:13.954 { 00:14:13.954 
"name": "BaseBdev2", 00:14:13.954 "uuid": "7171ce32-b992-4284-bf57-efa1dde4592e", 00:14:13.954 "is_configured": true, 00:14:13.954 "data_offset": 2048, 00:14:13.954 "data_size": 63488 00:14:13.954 }, 00:14:13.954 { 00:14:13.954 "name": "BaseBdev3", 00:14:13.954 "uuid": "390befcd-ab7c-4aee-9b70-68a1d995996c", 00:14:13.954 "is_configured": true, 00:14:13.954 "data_offset": 2048, 00:14:13.954 "data_size": 63488 00:14:13.954 } 00:14:13.954 ] 00:14:13.954 }' 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.954 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.520 [2024-10-30 10:42:35.773378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.520 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:14.520 "name": "Existed_Raid", 00:14:14.520 "aliases": [ 00:14:14.520 "3b42461c-810e-473f-a64a-1a7fdee69ecf" 00:14:14.520 ], 00:14:14.520 "product_name": "Raid Volume", 00:14:14.520 "block_size": 512, 00:14:14.520 "num_blocks": 63488, 00:14:14.520 "uuid": "3b42461c-810e-473f-a64a-1a7fdee69ecf", 00:14:14.520 "assigned_rate_limits": { 00:14:14.520 "rw_ios_per_sec": 0, 00:14:14.520 "rw_mbytes_per_sec": 0, 00:14:14.520 "r_mbytes_per_sec": 0, 00:14:14.520 "w_mbytes_per_sec": 0 00:14:14.520 }, 00:14:14.520 "claimed": false, 00:14:14.520 "zoned": false, 00:14:14.520 "supported_io_types": { 00:14:14.520 "read": true, 00:14:14.520 "write": true, 00:14:14.520 "unmap": false, 00:14:14.520 "flush": false, 00:14:14.520 "reset": true, 00:14:14.520 "nvme_admin": false, 00:14:14.520 "nvme_io": false, 00:14:14.520 "nvme_io_md": false, 00:14:14.520 "write_zeroes": true, 00:14:14.520 "zcopy": false, 00:14:14.520 "get_zone_info": false, 00:14:14.520 "zone_management": false, 00:14:14.520 "zone_append": false, 00:14:14.520 "compare": false, 00:14:14.520 "compare_and_write": false, 00:14:14.520 "abort": false, 00:14:14.520 "seek_hole": false, 00:14:14.520 "seek_data": false, 00:14:14.520 "copy": false, 00:14:14.520 "nvme_iov_md": false 00:14:14.520 }, 00:14:14.520 "memory_domains": [ 00:14:14.520 { 00:14:14.520 "dma_device_id": "system", 00:14:14.520 "dma_device_type": 1 00:14:14.520 }, 00:14:14.520 { 00:14:14.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.520 "dma_device_type": 2 00:14:14.520 }, 00:14:14.520 { 00:14:14.520 "dma_device_id": "system", 00:14:14.520 "dma_device_type": 1 00:14:14.520 }, 00:14:14.520 { 00:14:14.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.520 "dma_device_type": 2 00:14:14.520 }, 00:14:14.520 { 00:14:14.520 "dma_device_id": "system", 00:14:14.520 "dma_device_type": 1 00:14:14.520 }, 
00:14:14.520 { 00:14:14.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.520 "dma_device_type": 2 00:14:14.520 } 00:14:14.520 ], 00:14:14.520 "driver_specific": { 00:14:14.520 "raid": { 00:14:14.520 "uuid": "3b42461c-810e-473f-a64a-1a7fdee69ecf", 00:14:14.520 "strip_size_kb": 0, 00:14:14.520 "state": "online", 00:14:14.521 "raid_level": "raid1", 00:14:14.521 "superblock": true, 00:14:14.521 "num_base_bdevs": 3, 00:14:14.521 "num_base_bdevs_discovered": 3, 00:14:14.521 "num_base_bdevs_operational": 3, 00:14:14.521 "base_bdevs_list": [ 00:14:14.521 { 00:14:14.521 "name": "BaseBdev1", 00:14:14.521 "uuid": "9eebac9c-22f7-4554-8871-475b7d4d0b26", 00:14:14.521 "is_configured": true, 00:14:14.521 "data_offset": 2048, 00:14:14.521 "data_size": 63488 00:14:14.521 }, 00:14:14.521 { 00:14:14.521 "name": "BaseBdev2", 00:14:14.521 "uuid": "7171ce32-b992-4284-bf57-efa1dde4592e", 00:14:14.521 "is_configured": true, 00:14:14.521 "data_offset": 2048, 00:14:14.521 "data_size": 63488 00:14:14.521 }, 00:14:14.521 { 00:14:14.521 "name": "BaseBdev3", 00:14:14.521 "uuid": "390befcd-ab7c-4aee-9b70-68a1d995996c", 00:14:14.521 "is_configured": true, 00:14:14.521 "data_offset": 2048, 00:14:14.521 "data_size": 63488 00:14:14.521 } 00:14:14.521 ] 00:14:14.521 } 00:14:14.521 } 00:14:14.521 }' 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:14.521 BaseBdev2 00:14:14.521 BaseBdev3' 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.521 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.778 10:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:14.778 10:42:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.778 [2024-10-30 10:42:36.089114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.778 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.779 "name": "Existed_Raid", 00:14:14.779 "uuid": "3b42461c-810e-473f-a64a-1a7fdee69ecf", 00:14:14.779 "strip_size_kb": 0, 00:14:14.779 "state": "online", 00:14:14.779 "raid_level": 
"raid1", 00:14:14.779 "superblock": true, 00:14:14.779 "num_base_bdevs": 3, 00:14:14.779 "num_base_bdevs_discovered": 2, 00:14:14.779 "num_base_bdevs_operational": 2, 00:14:14.779 "base_bdevs_list": [ 00:14:14.779 { 00:14:14.779 "name": null, 00:14:14.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.779 "is_configured": false, 00:14:14.779 "data_offset": 0, 00:14:14.779 "data_size": 63488 00:14:14.779 }, 00:14:14.779 { 00:14:14.779 "name": "BaseBdev2", 00:14:14.779 "uuid": "7171ce32-b992-4284-bf57-efa1dde4592e", 00:14:14.779 "is_configured": true, 00:14:14.779 "data_offset": 2048, 00:14:14.779 "data_size": 63488 00:14:14.779 }, 00:14:14.779 { 00:14:14.779 "name": "BaseBdev3", 00:14:14.779 "uuid": "390befcd-ab7c-4aee-9b70-68a1d995996c", 00:14:14.779 "is_configured": true, 00:14:14.779 "data_offset": 2048, 00:14:14.779 "data_size": 63488 00:14:14.779 } 00:14:14.779 ] 00:14:14.779 }' 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.779 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.346 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.346 [2024-10-30 10:42:36.757107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.604 10:42:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.604 [2024-10-30 10:42:36.902170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:15.604 [2024-10-30 10:42:36.902303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.604 [2024-10-30 10:42:36.985354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.604 [2024-10-30 10:42:36.985460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.604 [2024-10-30 10:42:36.985481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.604 10:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.604 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.604 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:15.604 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:15.604 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:15.604 10:42:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:15.604 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.604 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:15.604 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.604 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.900 BaseBdev2 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:15.900 10:42:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.900 [ 00:14:15.900 { 00:14:15.900 "name": "BaseBdev2", 00:14:15.900 "aliases": [ 00:14:15.900 "25b8701b-bdfd-4b85-ac80-080aa3135197" 00:14:15.900 ], 00:14:15.900 "product_name": "Malloc disk", 00:14:15.900 "block_size": 512, 00:14:15.900 "num_blocks": 65536, 00:14:15.900 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:15.900 "assigned_rate_limits": { 00:14:15.900 "rw_ios_per_sec": 0, 00:14:15.900 "rw_mbytes_per_sec": 0, 00:14:15.900 "r_mbytes_per_sec": 0, 00:14:15.900 "w_mbytes_per_sec": 0 00:14:15.900 }, 00:14:15.900 "claimed": false, 00:14:15.900 "zoned": false, 00:14:15.900 "supported_io_types": { 00:14:15.900 "read": true, 00:14:15.900 "write": true, 00:14:15.900 "unmap": true, 00:14:15.900 "flush": true, 00:14:15.900 "reset": true, 00:14:15.900 "nvme_admin": false, 00:14:15.900 "nvme_io": false, 00:14:15.900 "nvme_io_md": false, 00:14:15.900 "write_zeroes": true, 00:14:15.900 "zcopy": true, 00:14:15.900 "get_zone_info": false, 00:14:15.900 "zone_management": false, 00:14:15.900 "zone_append": false, 00:14:15.900 "compare": false, 00:14:15.900 "compare_and_write": false, 00:14:15.900 "abort": true, 00:14:15.900 "seek_hole": false, 00:14:15.900 "seek_data": false, 00:14:15.900 "copy": true, 00:14:15.900 "nvme_iov_md": false 00:14:15.900 }, 00:14:15.900 "memory_domains": [ 00:14:15.900 { 00:14:15.900 "dma_device_id": "system", 00:14:15.900 "dma_device_type": 1 00:14:15.900 }, 00:14:15.900 { 00:14:15.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.900 "dma_device_type": 2 00:14:15.900 } 00:14:15.900 ], 00:14:15.900 "driver_specific": {} 00:14:15.900 } 00:14:15.900 ] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.900 BaseBdev3 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.900 [ 00:14:15.900 { 00:14:15.900 "name": "BaseBdev3", 00:14:15.900 "aliases": [ 00:14:15.900 "d97af8b5-e1be-4316-a678-a1bcca637fb8" 00:14:15.900 ], 00:14:15.900 "product_name": "Malloc disk", 00:14:15.900 "block_size": 512, 00:14:15.900 "num_blocks": 65536, 00:14:15.900 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:15.900 "assigned_rate_limits": { 00:14:15.900 "rw_ios_per_sec": 0, 00:14:15.900 "rw_mbytes_per_sec": 0, 00:14:15.900 "r_mbytes_per_sec": 0, 00:14:15.900 "w_mbytes_per_sec": 0 00:14:15.900 }, 00:14:15.900 "claimed": false, 00:14:15.900 "zoned": false, 00:14:15.900 "supported_io_types": { 00:14:15.900 "read": true, 00:14:15.900 "write": true, 00:14:15.900 "unmap": true, 00:14:15.900 "flush": true, 00:14:15.900 "reset": true, 00:14:15.900 "nvme_admin": false, 00:14:15.900 "nvme_io": false, 00:14:15.900 "nvme_io_md": false, 00:14:15.900 "write_zeroes": true, 00:14:15.900 "zcopy": true, 00:14:15.900 "get_zone_info": false, 00:14:15.900 "zone_management": false, 00:14:15.900 "zone_append": false, 00:14:15.900 "compare": false, 00:14:15.900 "compare_and_write": false, 00:14:15.900 "abort": true, 00:14:15.900 "seek_hole": false, 00:14:15.900 "seek_data": false, 00:14:15.900 "copy": true, 00:14:15.900 "nvme_iov_md": false 00:14:15.900 }, 00:14:15.900 "memory_domains": [ 00:14:15.900 { 00:14:15.900 "dma_device_id": "system", 00:14:15.900 "dma_device_type": 1 00:14:15.900 }, 00:14:15.900 { 00:14:15.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.900 "dma_device_type": 2 00:14:15.900 } 00:14:15.900 ], 00:14:15.900 "driver_specific": {} 00:14:15.900 } 00:14:15.900 ] 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.900 
10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.900 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.901 [2024-10-30 10:42:37.197027] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.901 [2024-10-30 10:42:37.197087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.901 [2024-10-30 10:42:37.197117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.901 [2024-10-30 10:42:37.199565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.901 10:42:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.901 "name": "Existed_Raid", 00:14:15.901 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:15.901 "strip_size_kb": 0, 00:14:15.901 "state": "configuring", 00:14:15.901 "raid_level": "raid1", 00:14:15.901 "superblock": true, 00:14:15.901 "num_base_bdevs": 3, 00:14:15.901 "num_base_bdevs_discovered": 2, 00:14:15.901 "num_base_bdevs_operational": 3, 00:14:15.901 "base_bdevs_list": [ 00:14:15.901 { 00:14:15.901 "name": "BaseBdev1", 00:14:15.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.901 "is_configured": false, 00:14:15.901 "data_offset": 0, 00:14:15.901 "data_size": 0 00:14:15.901 }, 00:14:15.901 { 00:14:15.901 "name": "BaseBdev2", 00:14:15.901 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:15.901 "is_configured": 
true, 00:14:15.901 "data_offset": 2048, 00:14:15.901 "data_size": 63488 00:14:15.901 }, 00:14:15.901 { 00:14:15.901 "name": "BaseBdev3", 00:14:15.901 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:15.901 "is_configured": true, 00:14:15.901 "data_offset": 2048, 00:14:15.901 "data_size": 63488 00:14:15.901 } 00:14:15.901 ] 00:14:15.901 }' 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.901 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.469 [2024-10-30 10:42:37.721187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.469 10:42:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.469 "name": "Existed_Raid", 00:14:16.469 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:16.469 "strip_size_kb": 0, 00:14:16.469 "state": "configuring", 00:14:16.469 "raid_level": "raid1", 00:14:16.469 "superblock": true, 00:14:16.469 "num_base_bdevs": 3, 00:14:16.469 "num_base_bdevs_discovered": 1, 00:14:16.469 "num_base_bdevs_operational": 3, 00:14:16.469 "base_bdevs_list": [ 00:14:16.469 { 00:14:16.469 "name": "BaseBdev1", 00:14:16.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.469 "is_configured": false, 00:14:16.469 "data_offset": 0, 00:14:16.469 "data_size": 0 00:14:16.469 }, 00:14:16.469 { 00:14:16.469 "name": null, 00:14:16.469 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:16.469 "is_configured": false, 00:14:16.469 "data_offset": 0, 00:14:16.469 "data_size": 63488 00:14:16.469 }, 00:14:16.469 { 00:14:16.469 "name": "BaseBdev3", 00:14:16.469 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:16.469 "is_configured": true, 
00:14:16.469 "data_offset": 2048, 00:14:16.469 "data_size": 63488 00:14:16.469 } 00:14:16.469 ] 00:14:16.469 }' 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.469 10:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.037 [2024-10-30 10:42:38.314826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.037 BaseBdev1 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:17.037 
10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.037 [ 00:14:17.037 { 00:14:17.037 "name": "BaseBdev1", 00:14:17.037 "aliases": [ 00:14:17.037 "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9" 00:14:17.037 ], 00:14:17.037 "product_name": "Malloc disk", 00:14:17.037 "block_size": 512, 00:14:17.037 "num_blocks": 65536, 00:14:17.037 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:17.037 "assigned_rate_limits": { 00:14:17.037 "rw_ios_per_sec": 0, 00:14:17.037 "rw_mbytes_per_sec": 0, 00:14:17.037 "r_mbytes_per_sec": 0, 00:14:17.037 "w_mbytes_per_sec": 0 00:14:17.037 }, 00:14:17.037 "claimed": true, 00:14:17.037 "claim_type": "exclusive_write", 00:14:17.037 "zoned": false, 00:14:17.037 "supported_io_types": { 00:14:17.037 "read": true, 00:14:17.037 "write": true, 00:14:17.037 "unmap": true, 00:14:17.037 "flush": true, 00:14:17.037 "reset": true, 00:14:17.037 "nvme_admin": false, 00:14:17.037 "nvme_io": 
false, 00:14:17.037 "nvme_io_md": false, 00:14:17.037 "write_zeroes": true, 00:14:17.037 "zcopy": true, 00:14:17.037 "get_zone_info": false, 00:14:17.037 "zone_management": false, 00:14:17.037 "zone_append": false, 00:14:17.037 "compare": false, 00:14:17.037 "compare_and_write": false, 00:14:17.037 "abort": true, 00:14:17.037 "seek_hole": false, 00:14:17.037 "seek_data": false, 00:14:17.037 "copy": true, 00:14:17.037 "nvme_iov_md": false 00:14:17.037 }, 00:14:17.037 "memory_domains": [ 00:14:17.037 { 00:14:17.037 "dma_device_id": "system", 00:14:17.037 "dma_device_type": 1 00:14:17.037 }, 00:14:17.037 { 00:14:17.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.037 "dma_device_type": 2 00:14:17.037 } 00:14:17.037 ], 00:14:17.037 "driver_specific": {} 00:14:17.037 } 00:14:17.037 ] 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.037 10:42:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.037 "name": "Existed_Raid", 00:14:17.037 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:17.037 "strip_size_kb": 0, 00:14:17.037 "state": "configuring", 00:14:17.037 "raid_level": "raid1", 00:14:17.037 "superblock": true, 00:14:17.037 "num_base_bdevs": 3, 00:14:17.037 "num_base_bdevs_discovered": 2, 00:14:17.037 "num_base_bdevs_operational": 3, 00:14:17.037 "base_bdevs_list": [ 00:14:17.037 { 00:14:17.037 "name": "BaseBdev1", 00:14:17.037 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:17.037 "is_configured": true, 00:14:17.037 "data_offset": 2048, 00:14:17.037 "data_size": 63488 00:14:17.037 }, 00:14:17.037 { 00:14:17.037 "name": null, 00:14:17.037 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:17.037 "is_configured": false, 00:14:17.037 "data_offset": 0, 00:14:17.037 "data_size": 63488 00:14:17.037 }, 00:14:17.037 { 00:14:17.037 "name": "BaseBdev3", 00:14:17.037 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:17.037 "is_configured": true, 00:14:17.037 "data_offset": 2048, 00:14:17.037 "data_size": 63488 00:14:17.037 } 00:14:17.037 ] 00:14:17.037 }' 
00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.037 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.605 [2024-10-30 10:42:38.911051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.605 
10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.605 "name": "Existed_Raid", 00:14:17.605 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:17.605 "strip_size_kb": 0, 00:14:17.605 "state": "configuring", 00:14:17.605 "raid_level": "raid1", 00:14:17.605 "superblock": true, 00:14:17.605 "num_base_bdevs": 3, 00:14:17.605 "num_base_bdevs_discovered": 1, 00:14:17.605 "num_base_bdevs_operational": 3, 00:14:17.605 "base_bdevs_list": [ 00:14:17.605 { 00:14:17.605 "name": "BaseBdev1", 00:14:17.605 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:17.605 "is_configured": true, 00:14:17.605 "data_offset": 2048, 00:14:17.605 "data_size": 63488 00:14:17.605 }, 00:14:17.605 { 
00:14:17.605 "name": null, 00:14:17.605 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:17.605 "is_configured": false, 00:14:17.605 "data_offset": 0, 00:14:17.605 "data_size": 63488 00:14:17.605 }, 00:14:17.605 { 00:14:17.605 "name": null, 00:14:17.605 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:17.605 "is_configured": false, 00:14:17.605 "data_offset": 0, 00:14:17.605 "data_size": 63488 00:14:17.605 } 00:14:17.605 ] 00:14:17.605 }' 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.605 10:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.173 [2024-10-30 10:42:39.467268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.173 10:42:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.173 "name": "Existed_Raid", 00:14:18.173 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:18.173 "strip_size_kb": 0, 
00:14:18.173 "state": "configuring", 00:14:18.173 "raid_level": "raid1", 00:14:18.173 "superblock": true, 00:14:18.173 "num_base_bdevs": 3, 00:14:18.173 "num_base_bdevs_discovered": 2, 00:14:18.173 "num_base_bdevs_operational": 3, 00:14:18.173 "base_bdevs_list": [ 00:14:18.173 { 00:14:18.173 "name": "BaseBdev1", 00:14:18.173 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:18.173 "is_configured": true, 00:14:18.173 "data_offset": 2048, 00:14:18.173 "data_size": 63488 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "name": null, 00:14:18.173 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:18.173 "is_configured": false, 00:14:18.173 "data_offset": 0, 00:14:18.173 "data_size": 63488 00:14:18.173 }, 00:14:18.173 { 00:14:18.173 "name": "BaseBdev3", 00:14:18.173 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:18.173 "is_configured": true, 00:14:18.173 "data_offset": 2048, 00:14:18.173 "data_size": 63488 00:14:18.173 } 00:14:18.173 ] 00:14:18.173 }' 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.173 10:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.738 [2024-10-30 10:42:40.067445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.738 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.996 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.996 "name": "Existed_Raid", 00:14:18.996 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:18.996 "strip_size_kb": 0, 00:14:18.996 "state": "configuring", 00:14:18.996 "raid_level": "raid1", 00:14:18.996 "superblock": true, 00:14:18.996 "num_base_bdevs": 3, 00:14:18.996 "num_base_bdevs_discovered": 1, 00:14:18.996 "num_base_bdevs_operational": 3, 00:14:18.996 "base_bdevs_list": [ 00:14:18.996 { 00:14:18.996 "name": null, 00:14:18.996 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:18.996 "is_configured": false, 00:14:18.996 "data_offset": 0, 00:14:18.996 "data_size": 63488 00:14:18.996 }, 00:14:18.996 { 00:14:18.996 "name": null, 00:14:18.996 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:18.996 "is_configured": false, 00:14:18.996 "data_offset": 0, 00:14:18.996 "data_size": 63488 00:14:18.996 }, 00:14:18.996 { 00:14:18.996 "name": "BaseBdev3", 00:14:18.996 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:18.996 "is_configured": true, 00:14:18.996 "data_offset": 2048, 00:14:18.996 "data_size": 63488 00:14:18.996 } 00:14:18.996 ] 00:14:18.996 }' 00:14:18.996 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.996 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.253 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.253 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.253 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.253 10:42:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.511 [2024-10-30 10:42:40.784562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.511 10:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.511 "name": "Existed_Raid", 00:14:19.511 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:19.511 "strip_size_kb": 0, 00:14:19.511 "state": "configuring", 00:14:19.511 "raid_level": "raid1", 00:14:19.511 "superblock": true, 00:14:19.511 "num_base_bdevs": 3, 00:14:19.511 "num_base_bdevs_discovered": 2, 00:14:19.511 "num_base_bdevs_operational": 3, 00:14:19.511 "base_bdevs_list": [ 00:14:19.511 { 00:14:19.511 "name": null, 00:14:19.511 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:19.511 "is_configured": false, 00:14:19.511 "data_offset": 0, 00:14:19.511 "data_size": 63488 00:14:19.511 }, 00:14:19.511 { 00:14:19.511 "name": "BaseBdev2", 00:14:19.512 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:19.512 "is_configured": true, 00:14:19.512 "data_offset": 2048, 00:14:19.512 "data_size": 63488 00:14:19.512 }, 00:14:19.512 { 00:14:19.512 "name": "BaseBdev3", 00:14:19.512 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:19.512 "is_configured": true, 00:14:19.512 "data_offset": 2048, 00:14:19.512 "data_size": 63488 00:14:19.512 } 00:14:19.512 ] 00:14:19.512 }' 00:14:19.512 10:42:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.512 10:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.080 [2024-10-30 10:42:41.453999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:20.080 [2024-10-30 10:42:41.454261] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:20.080 [2024-10-30 10:42:41.454279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.080 [2024-10-30 10:42:41.454582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:20.080 NewBaseBdev 00:14:20.080 [2024-10-30 10:42:41.454770] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:20.080 [2024-10-30 10:42:41.454792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:20.080 [2024-10-30 10:42:41.454960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.080 [ 00:14:20.080 { 00:14:20.080 "name": "NewBaseBdev", 00:14:20.080 "aliases": [ 00:14:20.080 "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9" 00:14:20.080 ], 00:14:20.080 "product_name": "Malloc disk", 00:14:20.080 "block_size": 512, 00:14:20.080 "num_blocks": 65536, 00:14:20.080 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:20.080 "assigned_rate_limits": { 00:14:20.080 "rw_ios_per_sec": 0, 00:14:20.080 "rw_mbytes_per_sec": 0, 00:14:20.080 "r_mbytes_per_sec": 0, 00:14:20.080 "w_mbytes_per_sec": 0 00:14:20.080 }, 00:14:20.080 "claimed": true, 00:14:20.080 "claim_type": "exclusive_write", 00:14:20.080 "zoned": false, 00:14:20.080 "supported_io_types": { 00:14:20.080 "read": true, 00:14:20.080 "write": true, 00:14:20.080 "unmap": true, 00:14:20.080 "flush": true, 00:14:20.080 "reset": true, 00:14:20.080 "nvme_admin": false, 00:14:20.080 "nvme_io": false, 00:14:20.080 "nvme_io_md": false, 00:14:20.080 "write_zeroes": true, 00:14:20.080 "zcopy": true, 00:14:20.080 "get_zone_info": false, 00:14:20.080 "zone_management": false, 00:14:20.080 "zone_append": false, 00:14:20.080 "compare": false, 00:14:20.080 "compare_and_write": false, 00:14:20.080 "abort": true, 00:14:20.080 "seek_hole": false, 00:14:20.080 "seek_data": false, 00:14:20.080 "copy": true, 00:14:20.080 "nvme_iov_md": false 00:14:20.080 }, 00:14:20.080 "memory_domains": [ 00:14:20.080 { 00:14:20.080 "dma_device_id": "system", 00:14:20.080 "dma_device_type": 1 00:14:20.080 }, 00:14:20.080 { 00:14:20.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.080 "dma_device_type": 2 00:14:20.080 } 00:14:20.080 ], 00:14:20.080 
"driver_specific": {} 00:14:20.080 } 00:14:20.080 ] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.080 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.080 "name": "Existed_Raid", 00:14:20.080 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:20.080 "strip_size_kb": 0, 00:14:20.080 "state": "online", 00:14:20.080 "raid_level": "raid1", 00:14:20.080 "superblock": true, 00:14:20.080 "num_base_bdevs": 3, 00:14:20.080 "num_base_bdevs_discovered": 3, 00:14:20.080 "num_base_bdevs_operational": 3, 00:14:20.080 "base_bdevs_list": [ 00:14:20.080 { 00:14:20.080 "name": "NewBaseBdev", 00:14:20.081 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:20.081 "is_configured": true, 00:14:20.081 "data_offset": 2048, 00:14:20.081 "data_size": 63488 00:14:20.081 }, 00:14:20.081 { 00:14:20.081 "name": "BaseBdev2", 00:14:20.081 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:20.081 "is_configured": true, 00:14:20.081 "data_offset": 2048, 00:14:20.081 "data_size": 63488 00:14:20.081 }, 00:14:20.081 { 00:14:20.081 "name": "BaseBdev3", 00:14:20.081 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:20.081 "is_configured": true, 00:14:20.081 "data_offset": 2048, 00:14:20.081 "data_size": 63488 00:14:20.081 } 00:14:20.081 ] 00:14:20.081 }' 00:14:20.081 10:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.081 10:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:20.648 10:42:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.648 [2024-10-30 10:42:42.030551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:20.648 "name": "Existed_Raid", 00:14:20.648 "aliases": [ 00:14:20.648 "3fbb0e98-4650-4f43-9320-aaac3f972233" 00:14:20.648 ], 00:14:20.648 "product_name": "Raid Volume", 00:14:20.648 "block_size": 512, 00:14:20.648 "num_blocks": 63488, 00:14:20.648 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:20.648 "assigned_rate_limits": { 00:14:20.648 "rw_ios_per_sec": 0, 00:14:20.648 "rw_mbytes_per_sec": 0, 00:14:20.648 "r_mbytes_per_sec": 0, 00:14:20.648 "w_mbytes_per_sec": 0 00:14:20.648 }, 00:14:20.648 "claimed": false, 00:14:20.648 "zoned": false, 00:14:20.648 "supported_io_types": { 00:14:20.648 "read": true, 00:14:20.648 "write": true, 00:14:20.648 "unmap": false, 00:14:20.648 "flush": false, 00:14:20.648 "reset": true, 00:14:20.648 "nvme_admin": false, 00:14:20.648 "nvme_io": false, 00:14:20.648 "nvme_io_md": false, 00:14:20.648 "write_zeroes": true, 00:14:20.648 "zcopy": false, 00:14:20.648 "get_zone_info": false, 00:14:20.648 "zone_management": false, 00:14:20.648 "zone_append": false, 
00:14:20.648 "compare": false, 00:14:20.648 "compare_and_write": false, 00:14:20.648 "abort": false, 00:14:20.648 "seek_hole": false, 00:14:20.648 "seek_data": false, 00:14:20.648 "copy": false, 00:14:20.648 "nvme_iov_md": false 00:14:20.648 }, 00:14:20.648 "memory_domains": [ 00:14:20.648 { 00:14:20.648 "dma_device_id": "system", 00:14:20.648 "dma_device_type": 1 00:14:20.648 }, 00:14:20.648 { 00:14:20.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.648 "dma_device_type": 2 00:14:20.648 }, 00:14:20.648 { 00:14:20.648 "dma_device_id": "system", 00:14:20.648 "dma_device_type": 1 00:14:20.648 }, 00:14:20.648 { 00:14:20.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.648 "dma_device_type": 2 00:14:20.648 }, 00:14:20.648 { 00:14:20.648 "dma_device_id": "system", 00:14:20.648 "dma_device_type": 1 00:14:20.648 }, 00:14:20.648 { 00:14:20.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.648 "dma_device_type": 2 00:14:20.648 } 00:14:20.648 ], 00:14:20.648 "driver_specific": { 00:14:20.648 "raid": { 00:14:20.648 "uuid": "3fbb0e98-4650-4f43-9320-aaac3f972233", 00:14:20.648 "strip_size_kb": 0, 00:14:20.648 "state": "online", 00:14:20.648 "raid_level": "raid1", 00:14:20.648 "superblock": true, 00:14:20.648 "num_base_bdevs": 3, 00:14:20.648 "num_base_bdevs_discovered": 3, 00:14:20.648 "num_base_bdevs_operational": 3, 00:14:20.648 "base_bdevs_list": [ 00:14:20.648 { 00:14:20.648 "name": "NewBaseBdev", 00:14:20.648 "uuid": "6cd7ee21-034b-4c9b-875b-2f6e1a50c4d9", 00:14:20.648 "is_configured": true, 00:14:20.648 "data_offset": 2048, 00:14:20.648 "data_size": 63488 00:14:20.648 }, 00:14:20.648 { 00:14:20.648 "name": "BaseBdev2", 00:14:20.648 "uuid": "25b8701b-bdfd-4b85-ac80-080aa3135197", 00:14:20.648 "is_configured": true, 00:14:20.648 "data_offset": 2048, 00:14:20.648 "data_size": 63488 00:14:20.648 }, 00:14:20.648 { 00:14:20.648 "name": "BaseBdev3", 00:14:20.648 "uuid": "d97af8b5-e1be-4316-a678-a1bcca637fb8", 00:14:20.648 "is_configured": true, 00:14:20.648 
"data_offset": 2048, 00:14:20.648 "data_size": 63488 00:14:20.648 } 00:14:20.648 ] 00:14:20.648 } 00:14:20.648 } 00:14:20.648 }' 00:14:20.648 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:20.928 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:20.928 BaseBdev2 00:14:20.928 BaseBdev3' 00:14:20.928 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:20.929 10:42:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:20.929 [2024-10-30 10:42:42.334323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.929 [2024-10-30 10:42:42.334364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.929 [2024-10-30 10:42:42.334463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.929 [2024-10-30 10:42:42.334818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.929 [2024-10-30 10:42:42.334836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68255 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68255 ']' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68255 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68255 00:14:20.929 killing process with pid 68255 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68255' 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@971 -- # kill 68255 00:14:20.929 [2024-10-30 10:42:42.369429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.929 10:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68255 00:14:21.187 [2024-10-30 10:42:42.635942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.562 10:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:22.562 00:14:22.562 real 0m11.779s 00:14:22.562 user 0m19.665s 00:14:22.562 sys 0m1.544s 00:14:22.562 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:22.562 10:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.562 ************************************ 00:14:22.562 END TEST raid_state_function_test_sb 00:14:22.562 ************************************ 00:14:22.562 10:42:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:22.562 10:42:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:22.562 10:42:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:22.562 10:42:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.562 ************************************ 00:14:22.562 START TEST raid_superblock_test 00:14:22.562 ************************************ 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:22.562 
10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68892 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68892 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68892 ']' 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:22.562 10:42:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.562 [2024-10-30 10:42:43.808271] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:14:22.562 [2024-10-30 10:42:43.808725] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68892 ] 00:14:22.562 [2024-10-30 10:42:43.998563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.820 [2024-10-30 10:42:44.147799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.078 [2024-10-30 10:42:44.365467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.078 [2024-10-30 10:42:44.365545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 
00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.645 malloc1 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.645 [2024-10-30 10:42:44.864492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:23.645 [2024-10-30 10:42:44.864571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.645 [2024-10-30 10:42:44.864605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:23.645 [2024-10-30 10:42:44.864621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.645 [2024-10-30 10:42:44.867401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.645 [2024-10-30 10:42:44.867582] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:23.645 pt1 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.645 malloc2 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.645 [2024-10-30 10:42:44.920350] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc2 00:14:23.645 [2024-10-30 10:42:44.920418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.645 [2024-10-30 10:42:44.920450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:23.645 [2024-10-30 10:42:44.920464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.645 [2024-10-30 10:42:44.923152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.645 [2024-10-30 10:42:44.923197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:23.645 pt2 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.645 
malloc3 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.645 [2024-10-30 10:42:44.985362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:23.645 [2024-10-30 10:42:44.985557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.645 [2024-10-30 10:42:44.985602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:23.645 [2024-10-30 10:42:44.985619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.645 [2024-10-30 10:42:44.988332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.645 [2024-10-30 10:42:44.988377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:23.645 pt3 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.645 10:42:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.645 [2024-10-30 10:42:44.997412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt1 is claimed 00:14:23.645 [2024-10-30 10:42:44.999799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:23.645 [2024-10-30 10:42:45.000038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:23.645 [2024-10-30 10:42:45.000266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:23.645 [2024-10-30 10:42:45.000296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:23.645 [2024-10-30 10:42:45.000594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:23.645 [2024-10-30 10:42:45.000812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:23.645 [2024-10-30 10:42:45.000842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:23.645 [2024-10-30 10:42:45.001039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.645 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.645 "name": "raid_bdev1", 00:14:23.645 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:23.645 "strip_size_kb": 0, 00:14:23.645 "state": "online", 00:14:23.645 "raid_level": "raid1", 00:14:23.645 "superblock": true, 00:14:23.645 "num_base_bdevs": 3, 00:14:23.645 "num_base_bdevs_discovered": 3, 00:14:23.645 "num_base_bdevs_operational": 3, 00:14:23.645 "base_bdevs_list": [ 00:14:23.645 { 00:14:23.645 "name": "pt1", 00:14:23.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.645 "is_configured": true, 00:14:23.645 "data_offset": 2048, 00:14:23.645 "data_size": 63488 00:14:23.645 }, 00:14:23.645 { 00:14:23.645 "name": "pt2", 00:14:23.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.645 "is_configured": true, 00:14:23.645 "data_offset": 2048, 00:14:23.645 "data_size": 63488 00:14:23.645 }, 00:14:23.645 { 00:14:23.645 "name": "pt3", 00:14:23.645 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:23.645 "is_configured": true, 00:14:23.645 "data_offset": 2048, 00:14:23.646 "data_size": 63488 00:14:23.646 } 00:14:23.646 ] 00:14:23.646 }' 
00:14:23.646 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.646 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.213 [2024-10-30 10:42:45.501892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.213 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:24.213 "name": "raid_bdev1", 00:14:24.213 "aliases": [ 00:14:24.213 "ff474059-59ee-4ead-97f6-b7b8a1749357" 00:14:24.213 ], 00:14:24.213 "product_name": "Raid Volume", 00:14:24.213 "block_size": 512, 00:14:24.213 "num_blocks": 63488, 00:14:24.213 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:24.213 "assigned_rate_limits": { 00:14:24.213 "rw_ios_per_sec": 0, 00:14:24.213 "rw_mbytes_per_sec": 
0, 00:14:24.213 "r_mbytes_per_sec": 0, 00:14:24.213 "w_mbytes_per_sec": 0 00:14:24.213 }, 00:14:24.213 "claimed": false, 00:14:24.213 "zoned": false, 00:14:24.213 "supported_io_types": { 00:14:24.213 "read": true, 00:14:24.213 "write": true, 00:14:24.213 "unmap": false, 00:14:24.213 "flush": false, 00:14:24.213 "reset": true, 00:14:24.213 "nvme_admin": false, 00:14:24.213 "nvme_io": false, 00:14:24.213 "nvme_io_md": false, 00:14:24.213 "write_zeroes": true, 00:14:24.213 "zcopy": false, 00:14:24.213 "get_zone_info": false, 00:14:24.213 "zone_management": false, 00:14:24.213 "zone_append": false, 00:14:24.213 "compare": false, 00:14:24.213 "compare_and_write": false, 00:14:24.213 "abort": false, 00:14:24.213 "seek_hole": false, 00:14:24.213 "seek_data": false, 00:14:24.213 "copy": false, 00:14:24.213 "nvme_iov_md": false 00:14:24.213 }, 00:14:24.213 "memory_domains": [ 00:14:24.213 { 00:14:24.213 "dma_device_id": "system", 00:14:24.213 "dma_device_type": 1 00:14:24.213 }, 00:14:24.213 { 00:14:24.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.213 "dma_device_type": 2 00:14:24.213 }, 00:14:24.213 { 00:14:24.213 "dma_device_id": "system", 00:14:24.213 "dma_device_type": 1 00:14:24.213 }, 00:14:24.213 { 00:14:24.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.213 "dma_device_type": 2 00:14:24.213 }, 00:14:24.213 { 00:14:24.213 "dma_device_id": "system", 00:14:24.213 "dma_device_type": 1 00:14:24.213 }, 00:14:24.213 { 00:14:24.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.213 "dma_device_type": 2 00:14:24.213 } 00:14:24.213 ], 00:14:24.213 "driver_specific": { 00:14:24.213 "raid": { 00:14:24.213 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:24.213 "strip_size_kb": 0, 00:14:24.213 "state": "online", 00:14:24.213 "raid_level": "raid1", 00:14:24.213 "superblock": true, 00:14:24.213 "num_base_bdevs": 3, 00:14:24.213 "num_base_bdevs_discovered": 3, 00:14:24.213 "num_base_bdevs_operational": 3, 00:14:24.213 "base_bdevs_list": [ 00:14:24.213 { 
00:14:24.213 "name": "pt1", 00:14:24.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.213 "is_configured": true, 00:14:24.213 "data_offset": 2048, 00:14:24.213 "data_size": 63488 00:14:24.213 }, 00:14:24.213 { 00:14:24.214 "name": "pt2", 00:14:24.214 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.214 "is_configured": true, 00:14:24.214 "data_offset": 2048, 00:14:24.214 "data_size": 63488 00:14:24.214 }, 00:14:24.214 { 00:14:24.214 "name": "pt3", 00:14:24.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.214 "is_configured": true, 00:14:24.214 "data_offset": 2048, 00:14:24.214 "data_size": 63488 00:14:24.214 } 00:14:24.214 ] 00:14:24.214 } 00:14:24.214 } 00:14:24.214 }' 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:24.214 pt2 00:14:24.214 pt3' 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.214 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.472 10:42:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.472 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:24.473 [2024-10-30 10:42:45.809883] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ff474059-59ee-4ead-97f6-b7b8a1749357 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ff474059-59ee-4ead-97f6-b7b8a1749357 ']' 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 [2024-10-30 10:42:45.849572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.473 [2024-10-30 10:42:45.849604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.473 [2024-10-30 10:42:45.849688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.473 [2024-10-30 10:42:45.849783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.473 [2024-10-30 10:42:45.849799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # 
rpc_cmd bdev_passthru_delete pt3 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.473 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.732 10:42:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.732 [2024-10-30 10:42:45.997682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:24.732 [2024-10-30 10:42:46.000242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:24.732 [2024-10-30 10:42:46.000418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:24.732 [2024-10-30 10:42:46.000594] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:24.732 [2024-10-30 10:42:46.000793] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:24.732 [2024-10-30 10:42:46.001007] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:24.732 [2024-10-30 10:42:46.001183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.732 [2024-10-30 10:42:46.001231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:24.732 request: 00:14:24.732 { 00:14:24.732 "name": "raid_bdev1", 00:14:24.732 "raid_level": "raid1", 00:14:24.732 "base_bdevs": [ 00:14:24.732 "malloc1", 00:14:24.732 "malloc2", 00:14:24.732 "malloc3" 00:14:24.732 ], 00:14:24.732 "superblock": false, 00:14:24.732 "method": "bdev_raid_create", 00:14:24.732 "req_id": 1 00:14:24.732 } 00:14:24.732 Got JSON-RPC error response 00:14:24.732 response: 00:14:24.732 { 00:14:24.732 "code": -17, 00:14:24.732 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:14:24.732 } 00:14:24.732 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:24.732 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:24.732 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.733 [2024-10-30 10:42:46.057651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:24.733 [2024-10-30 10:42:46.057720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.733 [2024-10-30 10:42:46.057755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:24.733 
[2024-10-30 10:42:46.057770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.733 [2024-10-30 10:42:46.060543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.733 [2024-10-30 10:42:46.060588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:24.733 [2024-10-30 10:42:46.060683] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:24.733 [2024-10-30 10:42:46.060746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:24.733 pt1 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.733 "name": "raid_bdev1", 00:14:24.733 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:24.733 "strip_size_kb": 0, 00:14:24.733 "state": "configuring", 00:14:24.733 "raid_level": "raid1", 00:14:24.733 "superblock": true, 00:14:24.733 "num_base_bdevs": 3, 00:14:24.733 "num_base_bdevs_discovered": 1, 00:14:24.733 "num_base_bdevs_operational": 3, 00:14:24.733 "base_bdevs_list": [ 00:14:24.733 { 00:14:24.733 "name": "pt1", 00:14:24.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.733 "is_configured": true, 00:14:24.733 "data_offset": 2048, 00:14:24.733 "data_size": 63488 00:14:24.733 }, 00:14:24.733 { 00:14:24.733 "name": null, 00:14:24.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.733 "is_configured": false, 00:14:24.733 "data_offset": 2048, 00:14:24.733 "data_size": 63488 00:14:24.733 }, 00:14:24.733 { 00:14:24.733 "name": null, 00:14:24.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.733 "is_configured": false, 00:14:24.733 "data_offset": 2048, 00:14:24.733 "data_size": 63488 00:14:24.733 } 00:14:24.733 ] 00:14:24.733 }' 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.733 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.299 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:25.299 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:14:25.299 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.299 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.299 [2024-10-30 10:42:46.557818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:25.299 [2024-10-30 10:42:46.557900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.299 [2024-10-30 10:42:46.557933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:25.300 [2024-10-30 10:42:46.557948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.300 [2024-10-30 10:42:46.558528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.300 [2024-10-30 10:42:46.558561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:25.300 [2024-10-30 10:42:46.558665] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:25.300 [2024-10-30 10:42:46.558696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:25.300 pt2 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.300 [2024-10-30 10:42:46.565805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 
00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.300 "name": "raid_bdev1", 00:14:25.300 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:25.300 "strip_size_kb": 0, 00:14:25.300 "state": "configuring", 00:14:25.300 "raid_level": "raid1", 00:14:25.300 "superblock": true, 00:14:25.300 "num_base_bdevs": 3, 00:14:25.300 "num_base_bdevs_discovered": 1, 00:14:25.300 "num_base_bdevs_operational": 3, 00:14:25.300 
"base_bdevs_list": [ 00:14:25.300 { 00:14:25.300 "name": "pt1", 00:14:25.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.300 "is_configured": true, 00:14:25.300 "data_offset": 2048, 00:14:25.300 "data_size": 63488 00:14:25.300 }, 00:14:25.300 { 00:14:25.300 "name": null, 00:14:25.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.300 "is_configured": false, 00:14:25.300 "data_offset": 0, 00:14:25.300 "data_size": 63488 00:14:25.300 }, 00:14:25.300 { 00:14:25.300 "name": null, 00:14:25.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.300 "is_configured": false, 00:14:25.300 "data_offset": 2048, 00:14:25.300 "data_size": 63488 00:14:25.300 } 00:14:25.300 ] 00:14:25.300 }' 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.300 10:42:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.867 [2024-10-30 10:42:47.081958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:25.867 [2024-10-30 10:42:47.082058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.867 [2024-10-30 10:42:47.082086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:25.867 [2024-10-30 10:42:47.082103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.867 [2024-10-30 
10:42:47.082662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.867 [2024-10-30 10:42:47.082703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:25.867 [2024-10-30 10:42:47.082800] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:25.867 [2024-10-30 10:42:47.082859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:25.867 pt2 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.867 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.867 [2024-10-30 10:42:47.089939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:25.867 [2024-10-30 10:42:47.090018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.867 [2024-10-30 10:42:47.090046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:25.867 [2024-10-30 10:42:47.090066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.867 [2024-10-30 10:42:47.090530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.867 [2024-10-30 10:42:47.090579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:25.867 [2024-10-30 10:42:47.090658] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on 
bdev pt3 00:14:25.867 [2024-10-30 10:42:47.090690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:25.868 [2024-10-30 10:42:47.090840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:25.868 [2024-10-30 10:42:47.090869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:25.868 [2024-10-30 10:42:47.091196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:25.868 [2024-10-30 10:42:47.091566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:25.868 [2024-10-30 10:42:47.091590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:25.868 [2024-10-30 10:42:47.091763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.868 pt3 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.868 "name": "raid_bdev1", 00:14:25.868 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:25.868 "strip_size_kb": 0, 00:14:25.868 "state": "online", 00:14:25.868 "raid_level": "raid1", 00:14:25.868 "superblock": true, 00:14:25.868 "num_base_bdevs": 3, 00:14:25.868 "num_base_bdevs_discovered": 3, 00:14:25.868 "num_base_bdevs_operational": 3, 00:14:25.868 "base_bdevs_list": [ 00:14:25.868 { 00:14:25.868 "name": "pt1", 00:14:25.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.868 "is_configured": true, 00:14:25.868 "data_offset": 2048, 00:14:25.868 "data_size": 63488 00:14:25.868 }, 00:14:25.868 { 00:14:25.868 "name": "pt2", 00:14:25.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.868 "is_configured": true, 00:14:25.868 "data_offset": 2048, 00:14:25.868 "data_size": 63488 00:14:25.868 }, 00:14:25.868 { 00:14:25.868 "name": "pt3", 00:14:25.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.868 "is_configured": true, 00:14:25.868 "data_offset": 2048, 
00:14:25.868 "data_size": 63488 00:14:25.868 } 00:14:25.868 ] 00:14:25.868 }' 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.868 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.126 [2024-10-30 10:42:47.566458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.126 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.385 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.385 "name": "raid_bdev1", 00:14:26.385 "aliases": [ 00:14:26.385 "ff474059-59ee-4ead-97f6-b7b8a1749357" 00:14:26.385 ], 00:14:26.385 "product_name": "Raid Volume", 00:14:26.385 "block_size": 512, 00:14:26.385 "num_blocks": 63488, 00:14:26.385 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:26.385 
"assigned_rate_limits": { 00:14:26.385 "rw_ios_per_sec": 0, 00:14:26.385 "rw_mbytes_per_sec": 0, 00:14:26.385 "r_mbytes_per_sec": 0, 00:14:26.385 "w_mbytes_per_sec": 0 00:14:26.385 }, 00:14:26.385 "claimed": false, 00:14:26.385 "zoned": false, 00:14:26.385 "supported_io_types": { 00:14:26.385 "read": true, 00:14:26.385 "write": true, 00:14:26.385 "unmap": false, 00:14:26.385 "flush": false, 00:14:26.385 "reset": true, 00:14:26.385 "nvme_admin": false, 00:14:26.385 "nvme_io": false, 00:14:26.385 "nvme_io_md": false, 00:14:26.385 "write_zeroes": true, 00:14:26.385 "zcopy": false, 00:14:26.385 "get_zone_info": false, 00:14:26.385 "zone_management": false, 00:14:26.385 "zone_append": false, 00:14:26.385 "compare": false, 00:14:26.385 "compare_and_write": false, 00:14:26.385 "abort": false, 00:14:26.385 "seek_hole": false, 00:14:26.385 "seek_data": false, 00:14:26.385 "copy": false, 00:14:26.385 "nvme_iov_md": false 00:14:26.385 }, 00:14:26.385 "memory_domains": [ 00:14:26.385 { 00:14:26.385 "dma_device_id": "system", 00:14:26.385 "dma_device_type": 1 00:14:26.385 }, 00:14:26.385 { 00:14:26.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.385 "dma_device_type": 2 00:14:26.385 }, 00:14:26.385 { 00:14:26.385 "dma_device_id": "system", 00:14:26.385 "dma_device_type": 1 00:14:26.385 }, 00:14:26.385 { 00:14:26.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.385 "dma_device_type": 2 00:14:26.385 }, 00:14:26.385 { 00:14:26.385 "dma_device_id": "system", 00:14:26.386 "dma_device_type": 1 00:14:26.386 }, 00:14:26.386 { 00:14:26.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.386 "dma_device_type": 2 00:14:26.386 } 00:14:26.386 ], 00:14:26.386 "driver_specific": { 00:14:26.386 "raid": { 00:14:26.386 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:26.386 "strip_size_kb": 0, 00:14:26.386 "state": "online", 00:14:26.386 "raid_level": "raid1", 00:14:26.386 "superblock": true, 00:14:26.386 "num_base_bdevs": 3, 00:14:26.386 "num_base_bdevs_discovered": 3, 
00:14:26.386 "num_base_bdevs_operational": 3, 00:14:26.386 "base_bdevs_list": [ 00:14:26.386 { 00:14:26.386 "name": "pt1", 00:14:26.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:26.386 "is_configured": true, 00:14:26.386 "data_offset": 2048, 00:14:26.386 "data_size": 63488 00:14:26.386 }, 00:14:26.386 { 00:14:26.386 "name": "pt2", 00:14:26.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.386 "is_configured": true, 00:14:26.386 "data_offset": 2048, 00:14:26.386 "data_size": 63488 00:14:26.386 }, 00:14:26.386 { 00:14:26.386 "name": "pt3", 00:14:26.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.386 "is_configured": true, 00:14:26.386 "data_offset": 2048, 00:14:26.386 "data_size": 63488 00:14:26.386 } 00:14:26.386 ] 00:14:26.386 } 00:14:26.386 } 00:14:26.386 }' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:26.386 pt2 00:14:26.386 pt3' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.386 10:42:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.386 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.645 [2024-10-30 10:42:47.862512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ff474059-59ee-4ead-97f6-b7b8a1749357 '!=' ff474059-59ee-4ead-97f6-b7b8a1749357 ']' 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.645 [2024-10-30 10:42:47.914216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
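The trace above repeatedly calls `verify_raid_bdev_state`, which fetches `bdev_raid_get_bdevs all`, picks out `raid_bdev1` with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares the state fields. A rough Python sketch of that same selection-and-compare logic follows; the sample record is abridged from the JSON dumped in this log, and the helper name mirrors the shell function rather than any real SPDK API:

```python
import json

# Abridged record in the shape of the log's bdev_raid_get_bdevs output
# (static sample data, not a live RPC result).
bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level, operational):
    """Mirror the shell helper's checks: select the named bdev from the
    RPC output and compare the fields the test asserts on."""
    # Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(bdevs, "raid_bdev1", "online", "raid1", 2))  # True
```

This matches the `verify_raid_bdev_state raid_bdev1 online raid1 0 2` invocation traced above: after `bdev_passthru_delete pt1`, the raid1 array stays online with 2 of 3 base bdevs operational.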
00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.645 "name": "raid_bdev1", 00:14:26.645 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:26.645 "strip_size_kb": 0, 00:14:26.645 "state": "online", 00:14:26.645 "raid_level": "raid1", 00:14:26.645 "superblock": true, 00:14:26.645 "num_base_bdevs": 3, 00:14:26.645 "num_base_bdevs_discovered": 2, 00:14:26.645 "num_base_bdevs_operational": 2, 00:14:26.645 "base_bdevs_list": [ 00:14:26.645 { 00:14:26.645 "name": null, 00:14:26.645 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:26.645 "is_configured": false, 00:14:26.645 "data_offset": 0, 00:14:26.645 "data_size": 63488 00:14:26.645 }, 00:14:26.645 { 00:14:26.645 "name": "pt2", 00:14:26.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:26.645 "is_configured": true, 00:14:26.645 "data_offset": 2048, 00:14:26.645 "data_size": 63488 00:14:26.645 }, 00:14:26.645 { 00:14:26.645 "name": "pt3", 00:14:26.645 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:26.645 "is_configured": true, 00:14:26.645 "data_offset": 2048, 00:14:26.645 "data_size": 63488 00:14:26.645 } 00:14:26.645 ] 00:14:26.645 }' 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.645 10:42:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.212 [2024-10-30 10:42:48.418397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.212 [2024-10-30 10:42:48.418431] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.212 [2024-10-30 10:42:48.418517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.212 [2024-10-30 10:42:48.418588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.212 [2024-10-30 10:42:48.418609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.212 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.212 [2024-10-30 10:42:48.502351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.212 [2024-10-30 10:42:48.502429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.212 [2024-10-30 10:42:48.502454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:27.212 [2024-10-30 10:42:48.502470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.212 [2024-10-30 10:42:48.505318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.213 [2024-10-30 10:42:48.505369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.213 [2024-10-30 10:42:48.505462] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:27.213 [2024-10-30 10:42:48.505522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:27.213 pt2 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.213 10:42:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.213 "name": "raid_bdev1", 00:14:27.213 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:27.213 "strip_size_kb": 0, 00:14:27.213 "state": "configuring", 00:14:27.213 "raid_level": "raid1", 00:14:27.213 "superblock": true, 00:14:27.213 "num_base_bdevs": 3, 00:14:27.213 "num_base_bdevs_discovered": 1, 00:14:27.213 "num_base_bdevs_operational": 2, 00:14:27.213 "base_bdevs_list": [ 00:14:27.213 { 00:14:27.213 "name": null, 00:14:27.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.213 
"is_configured": false, 00:14:27.213 "data_offset": 2048, 00:14:27.213 "data_size": 63488 00:14:27.213 }, 00:14:27.213 { 00:14:27.213 "name": "pt2", 00:14:27.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.213 "is_configured": true, 00:14:27.213 "data_offset": 2048, 00:14:27.213 "data_size": 63488 00:14:27.213 }, 00:14:27.213 { 00:14:27.213 "name": null, 00:14:27.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:27.213 "is_configured": false, 00:14:27.213 "data_offset": 2048, 00:14:27.213 "data_size": 63488 00:14:27.213 } 00:14:27.213 ] 00:14:27.213 }' 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.213 10:42:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.779 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:27.779 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:27.779 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:27.779 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:27.779 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.779 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.779 [2024-10-30 10:42:49.022563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:27.779 [2024-10-30 10:42:49.022798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.779 [2024-10-30 10:42:49.022838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:27.779 [2024-10-30 10:42:49.022857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.779 [2024-10-30 10:42:49.023447] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.779 [2024-10-30 10:42:49.023492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:27.779 [2024-10-30 10:42:49.023599] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:27.779 [2024-10-30 10:42:49.023637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:27.779 [2024-10-30 10:42:49.023798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:27.779 [2024-10-30 10:42:49.023819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:27.779 [2024-10-30 10:42:49.024149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:27.780 [2024-10-30 10:42:49.024342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:27.780 [2024-10-30 10:42:49.024357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:27.780 [2024-10-30 10:42:49.024521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.780 pt3 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.780 "name": "raid_bdev1", 00:14:27.780 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:27.780 "strip_size_kb": 0, 00:14:27.780 "state": "online", 00:14:27.780 "raid_level": "raid1", 00:14:27.780 "superblock": true, 00:14:27.780 "num_base_bdevs": 3, 00:14:27.780 "num_base_bdevs_discovered": 2, 00:14:27.780 "num_base_bdevs_operational": 2, 00:14:27.780 "base_bdevs_list": [ 00:14:27.780 { 00:14:27.780 "name": null, 00:14:27.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.780 "is_configured": false, 00:14:27.780 "data_offset": 2048, 00:14:27.780 "data_size": 63488 00:14:27.780 }, 00:14:27.780 { 00:14:27.780 "name": "pt2", 00:14:27.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:27.780 "is_configured": true, 00:14:27.780 "data_offset": 2048, 00:14:27.780 "data_size": 63488 00:14:27.780 }, 00:14:27.780 { 00:14:27.780 "name": "pt3", 00:14:27.780 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:27.780 "is_configured": true, 00:14:27.780 "data_offset": 2048, 00:14:27.780 "data_size": 63488 00:14:27.780 } 00:14:27.780 ] 00:14:27.780 }' 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.780 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.050 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:28.050 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.050 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.050 [2024-10-30 10:42:49.514652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.050 [2024-10-30 10:42:49.514705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.050 [2024-10-30 10:42:49.514801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.050 [2024-10-30 10:42:49.514879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.050 [2024-10-30 10:42:49.514908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:28.050 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
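After `bdev_raid_delete raid_bdev1`, the trace runs `bdev_raid_get_bdevs all | jq -r '.[]'`, captures the result into `raid_bdev=`, and the `'[' -n '' ']'` test falls through because the output is empty. A hypothetical Python sketch of that emptiness check, using a hand-written empty RPC response rather than a live socket:

```python
import json

def extract_raid_bdev(rpc_output: str) -> str:
    """Rough equivalent of: raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[]')
    Flatten each array element onto its own line, as jq's '.[]' does."""
    entries = json.loads(rpc_output)
    return "\n".join(json.dumps(e) for e in entries)

# Once the raid bdev is deleted, the RPC returns an empty array.
after_delete = extract_raid_bdev("[]")

# Mirrors the shell's: '[' -n "$raid_bdev" ']' -- an empty capture
# means no raid bdev survived the delete.
print(after_delete == "")  # True
```

The same pattern explains the `raid_bdev=` / `'[' -n '' ']'` pair at sh@527/sh@528 in the trace: an empty jq result is the test's signal that teardown succeeded before the pt bdevs are recreated.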
00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.307 [2024-10-30 10:42:49.582717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:28.307 [2024-10-30 10:42:49.582797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.307 [2024-10-30 10:42:49.582828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:28.307 [2024-10-30 10:42:49.582842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.307 [2024-10-30 10:42:49.585806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.307 [2024-10-30 10:42:49.585842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:28.307 [2024-10-30 10:42:49.585935] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:14:28.307 [2024-10-30 10:42:49.585989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:28.307 [2024-10-30 10:42:49.586179] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:28.307 [2024-10-30 10:42:49.586204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.307 [2024-10-30 10:42:49.586227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:28.307 [2024-10-30 10:42:49.586294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.307 pt1 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.307 10:42:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.307 "name": "raid_bdev1", 00:14:28.307 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:28.307 "strip_size_kb": 0, 00:14:28.307 "state": "configuring", 00:14:28.307 "raid_level": "raid1", 00:14:28.307 "superblock": true, 00:14:28.307 "num_base_bdevs": 3, 00:14:28.307 "num_base_bdevs_discovered": 1, 00:14:28.307 "num_base_bdevs_operational": 2, 00:14:28.307 "base_bdevs_list": [ 00:14:28.307 { 00:14:28.307 "name": null, 00:14:28.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.307 "is_configured": false, 00:14:28.307 "data_offset": 2048, 00:14:28.307 "data_size": 63488 00:14:28.307 }, 00:14:28.307 { 00:14:28.307 "name": "pt2", 00:14:28.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.307 "is_configured": true, 00:14:28.307 "data_offset": 2048, 00:14:28.307 "data_size": 63488 00:14:28.307 }, 00:14:28.307 { 00:14:28.307 "name": null, 00:14:28.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.307 "is_configured": false, 00:14:28.307 "data_offset": 2048, 00:14:28.307 "data_size": 63488 00:14:28.307 } 00:14:28.307 ] 00:14:28.307 }' 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.307 10:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.873 [2024-10-30 10:42:50.182920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:28.873 [2024-10-30 10:42:50.183015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.873 [2024-10-30 10:42:50.183047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:28.873 [2024-10-30 10:42:50.183062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.873 [2024-10-30 10:42:50.183647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.873 [2024-10-30 10:42:50.183676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:28.873 [2024-10-30 10:42:50.183824] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:28.873 [2024-10-30 10:42:50.183882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt3 is claimed 00:14:28.873 [2024-10-30 10:42:50.184060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:28.873 [2024-10-30 10:42:50.184077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:28.873 [2024-10-30 10:42:50.184395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:28.873 [2024-10-30 10:42:50.184601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:28.873 [2024-10-30 10:42:50.184622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:28.873 [2024-10-30 10:42:50.184825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.873 pt3 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.873 "name": "raid_bdev1", 00:14:28.873 "uuid": "ff474059-59ee-4ead-97f6-b7b8a1749357", 00:14:28.873 "strip_size_kb": 0, 00:14:28.873 "state": "online", 00:14:28.873 "raid_level": "raid1", 00:14:28.873 "superblock": true, 00:14:28.873 "num_base_bdevs": 3, 00:14:28.873 "num_base_bdevs_discovered": 2, 00:14:28.873 "num_base_bdevs_operational": 2, 00:14:28.873 "base_bdevs_list": [ 00:14:28.873 { 00:14:28.873 "name": null, 00:14:28.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.873 "is_configured": false, 00:14:28.873 "data_offset": 2048, 00:14:28.873 "data_size": 63488 00:14:28.873 }, 00:14:28.873 { 00:14:28.873 "name": "pt2", 00:14:28.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.873 "is_configured": true, 00:14:28.873 "data_offset": 2048, 00:14:28.873 "data_size": 63488 00:14:28.873 }, 00:14:28.873 { 00:14:28.873 "name": "pt3", 00:14:28.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.873 "is_configured": true, 00:14:28.873 "data_offset": 2048, 00:14:28.873 "data_size": 63488 00:14:28.873 } 00:14:28.873 ] 00:14:28.873 }' 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.873 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.439 10:42:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.439 [2024-10-30 10:42:50.735419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ff474059-59ee-4ead-97f6-b7b8a1749357 '!=' ff474059-59ee-4ead-97f6-b7b8a1749357 ']' 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68892 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68892 ']' 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68892 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:29.439 
10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68892 00:14:29.439 killing process with pid 68892 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68892' 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68892 00:14:29.439 [2024-10-30 10:42:50.806752] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.439 10:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68892 00:14:29.439 [2024-10-30 10:42:50.806876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.439 [2024-10-30 10:42:50.806965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.439 [2024-10-30 10:42:50.806998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:29.696 [2024-10-30 10:42:51.070666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.148 10:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:31.148 00:14:31.148 real 0m8.466s 00:14:31.148 user 0m13.835s 00:14:31.148 sys 0m1.150s 00:14:31.148 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:31.148 ************************************ 00:14:31.148 END TEST raid_superblock_test 00:14:31.148 ************************************ 00:14:31.148 10:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.148 10:42:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 
read 00:14:31.148 10:42:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:31.148 10:42:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:31.148 10:42:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.148 ************************************ 00:14:31.148 START TEST raid_read_error_test 00:14:31.148 ************************************ 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FzUgIWFMFe 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69343 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69343 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69343 ']' 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.148 10:42:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.148 [2024-10-30 10:42:52.325891] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:14:31.148 [2024-10-30 10:42:52.326087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69343 ] 00:14:31.148 [2024-10-30 10:42:52.511944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.407 [2024-10-30 10:42:52.641495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.407 [2024-10-30 10:42:52.839993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.407 [2024-10-30 10:42:52.840104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.974 BaseBdev1_malloc 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.974 true 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.974 [2024-10-30 10:42:53.421213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:31.974 [2024-10-30 10:42:53.421309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.974 [2024-10-30 10:42:53.421338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:31.974 [2024-10-30 10:42:53.421356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.974 [2024-10-30 10:42:53.424557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.974 [2024-10-30 10:42:53.424610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:31.974 BaseBdev1 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:31.974 10:42:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.974 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.232 BaseBdev2_malloc 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.232 true 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.232 [2024-10-30 10:42:53.477354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:32.232 [2024-10-30 10:42:53.477432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.232 [2024-10-30 10:42:53.477456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:32.232 [2024-10-30 10:42:53.477472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.232 [2024-10-30 10:42:53.480320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.232 [2024-10-30 10:42:53.480363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:32.232 BaseBdev2 00:14:32.232 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.232 
10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.233 BaseBdev3_malloc 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.233 true 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.233 [2024-10-30 10:42:53.546592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:32.233 [2024-10-30 10:42:53.546683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.233 [2024-10-30 10:42:53.546726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:32.233 [2024-10-30 10:42:53.546744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.233 [2024-10-30 10:42:53.549625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:32.233 [2024-10-30 10:42:53.549702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:32.233 BaseBdev3 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.233 [2024-10-30 10:42:53.554694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.233 [2024-10-30 10:42:53.557235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.233 [2024-10-30 10:42:53.557359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.233 [2024-10-30 10:42:53.557678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:32.233 [2024-10-30 10:42:53.557697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:32.233 [2024-10-30 10:42:53.558027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:32.233 [2024-10-30 10:42:53.558249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:32.233 [2024-10-30 10:42:53.558269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:32.233 [2024-10-30 10:42:53.558449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.233 "name": "raid_bdev1", 00:14:32.233 "uuid": "cf44ef3f-04f9-4c70-b848-aac21c58ec14", 00:14:32.233 "strip_size_kb": 0, 00:14:32.233 "state": "online", 00:14:32.233 "raid_level": "raid1", 00:14:32.233 "superblock": true, 00:14:32.233 "num_base_bdevs": 3, 00:14:32.233 "num_base_bdevs_discovered": 3, 00:14:32.233 "num_base_bdevs_operational": 
3, 00:14:32.233 "base_bdevs_list": [ 00:14:32.233 { 00:14:32.233 "name": "BaseBdev1", 00:14:32.233 "uuid": "bd093227-4d66-54ce-9e99-8a377edc8e6f", 00:14:32.233 "is_configured": true, 00:14:32.233 "data_offset": 2048, 00:14:32.233 "data_size": 63488 00:14:32.233 }, 00:14:32.233 { 00:14:32.233 "name": "BaseBdev2", 00:14:32.233 "uuid": "6c31c8fd-6970-50d2-9ed7-b87947522a7c", 00:14:32.233 "is_configured": true, 00:14:32.233 "data_offset": 2048, 00:14:32.233 "data_size": 63488 00:14:32.233 }, 00:14:32.233 { 00:14:32.233 "name": "BaseBdev3", 00:14:32.233 "uuid": "b493cf2d-932d-5ef1-8a3c-d013e1453322", 00:14:32.233 "is_configured": true, 00:14:32.233 "data_offset": 2048, 00:14:32.233 "data_size": 63488 00:14:32.233 } 00:14:32.233 ] 00:14:32.233 }' 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.233 10:42:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.800 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:32.800 10:42:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:32.800 [2024-10-30 10:42:54.188263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 
00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.735 "name": 
"raid_bdev1", 00:14:33.735 "uuid": "cf44ef3f-04f9-4c70-b848-aac21c58ec14", 00:14:33.735 "strip_size_kb": 0, 00:14:33.735 "state": "online", 00:14:33.735 "raid_level": "raid1", 00:14:33.735 "superblock": true, 00:14:33.735 "num_base_bdevs": 3, 00:14:33.735 "num_base_bdevs_discovered": 3, 00:14:33.735 "num_base_bdevs_operational": 3, 00:14:33.735 "base_bdevs_list": [ 00:14:33.735 { 00:14:33.735 "name": "BaseBdev1", 00:14:33.735 "uuid": "bd093227-4d66-54ce-9e99-8a377edc8e6f", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 }, 00:14:33.735 { 00:14:33.735 "name": "BaseBdev2", 00:14:33.735 "uuid": "6c31c8fd-6970-50d2-9ed7-b87947522a7c", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 }, 00:14:33.735 { 00:14:33.735 "name": "BaseBdev3", 00:14:33.735 "uuid": "b493cf2d-932d-5ef1-8a3c-d013e1453322", 00:14:33.735 "is_configured": true, 00:14:33.735 "data_offset": 2048, 00:14:33.735 "data_size": 63488 00:14:33.735 } 00:14:33.735 ] 00:14:33.735 }' 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.735 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.300 [2024-10-30 10:42:55.624076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.300 [2024-10-30 10:42:55.624125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.300 [2024-10-30 10:42:55.628365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.300 [2024-10-30 
10:42:55.628684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.300 [2024-10-30 10:42:55.629056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.300 [2024-10-30 10:42:55.629261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:34.300 { 00:14:34.300 "results": [ 00:14:34.300 { 00:14:34.300 "job": "raid_bdev1", 00:14:34.300 "core_mask": "0x1", 00:14:34.300 "workload": "randrw", 00:14:34.300 "percentage": 50, 00:14:34.300 "status": "finished", 00:14:34.300 "queue_depth": 1, 00:14:34.300 "io_size": 131072, 00:14:34.300 "runtime": 1.433341, 00:14:34.300 "iops": 9715.761985459147, 00:14:34.300 "mibps": 1214.4702481823933, 00:14:34.300 "io_failed": 0, 00:14:34.300 "io_timeout": 0, 00:14:34.300 "avg_latency_us": 98.75857689344978, 00:14:34.300 "min_latency_us": 38.86545454545455, 00:14:34.300 "max_latency_us": 1951.1854545454546 00:14:34.300 } 00:14:34.300 ], 00:14:34.300 "core_count": 1 00:14:34.300 } 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69343 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69343 ']' 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69343 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69343 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:34.300 10:42:55
bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69343' 00:14:34.300 killing process with pid 69343 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69343 00:14:34.300 [2024-10-30 10:42:55.669994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.300 10:42:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69343 00:14:34.558 [2024-10-30 10:42:55.874385] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FzUgIWFMFe 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:35.938 ************************************ 00:14:35.938 END TEST raid_read_error_test 00:14:35.938 ************************************ 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:35.938 00:14:35.938 real 0m4.793s 00:14:35.938 user 0m5.988s 00:14:35.938 sys 0m0.586s 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:35.938 10:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.938 10:42:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # 
run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:35.938 10:42:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:35.938 10:42:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:35.938 10:42:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.938 ************************************ 00:14:35.938 START TEST raid_write_error_test 00:14:35.938 ************************************ 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i <= num_base_bdevs )) 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NZUC50dEwa 00:14:35.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69490 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69490 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69490 ']' 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:35.938 10:42:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.938 [2024-10-30 10:42:57.160796] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:14:35.938 [2024-10-30 10:42:57.161038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69490 ] 00:14:35.938 [2024-10-30 10:42:57.344089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.197 [2024-10-30 10:42:57.486640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.454 [2024-10-30 10:42:57.709976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.454 [2024-10-30 10:42:57.710044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.713 BaseBdev1_malloc 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.713 true 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.713 [2024-10-30 10:42:58.153233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:36.713 [2024-10-30 10:42:58.153300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.713 [2024-10-30 10:42:58.153329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:36.713 [2024-10-30 10:42:58.153346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.713 [2024-10-30 10:42:58.156216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.713 [2024-10-30 10:42:58.156265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.713 BaseBdev1 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.713 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.972 BaseBdev2_malloc 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:36.972 10:42:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.972 true 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.972 [2024-10-30 10:42:58.209751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:36.972 [2024-10-30 10:42:58.209830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.972 [2024-10-30 10:42:58.209854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:36.972 [2024-10-30 10:42:58.209870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.972 [2024-10-30 10:42:58.212659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.972 [2024-10-30 10:42:58.212721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.972 BaseBdev2 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:36.972 BaseBdev3_malloc 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.972 true 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.972 [2024-10-30 10:42:58.274183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:36.972 [2024-10-30 10:42:58.274263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.972 [2024-10-30 10:42:58.274295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:36.972 [2024-10-30 10:42:58.274327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.972 [2024-10-30 10:42:58.277220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.972 [2024-10-30 10:42:58.277267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:36.972 BaseBdev3 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 
00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.972 [2024-10-30 10:42:58.282364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.972 [2024-10-30 10:42:58.285147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.972 [2024-10-30 10:42:58.285253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.972 [2024-10-30 10:42:58.285541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:36.972 [2024-10-30 10:42:58.285559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:36.972 [2024-10-30 10:42:58.285882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:36.972 [2024-10-30 10:42:58.286127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:36.972 [2024-10-30 10:42:58.286148] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:36.972 [2024-10-30 10:42:58.286422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.972 "name": "raid_bdev1", 00:14:36.972 "uuid": "ba8372f9-ac2c-474f-bc9b-7966e0e3dffc", 00:14:36.972 "strip_size_kb": 0, 00:14:36.972 "state": "online", 00:14:36.972 "raid_level": "raid1", 00:14:36.972 "superblock": true, 00:14:36.972 "num_base_bdevs": 3, 00:14:36.972 "num_base_bdevs_discovered": 3, 00:14:36.972 "num_base_bdevs_operational": 3, 00:14:36.972 "base_bdevs_list": [ 00:14:36.972 { 00:14:36.972 "name": "BaseBdev1", 00:14:36.972 "uuid": "1b5b8d5f-194c-5d37-b1e2-3b1bd6ee7887", 00:14:36.972 "is_configured": true, 00:14:36.972 "data_offset": 2048, 00:14:36.972 "data_size": 63488 00:14:36.972 }, 00:14:36.972 { 00:14:36.972 "name": "BaseBdev2", 00:14:36.972 "uuid": "34170f88-8e90-53e5-839d-9ff8b2e34e81", 00:14:36.972 "is_configured": 
true, 00:14:36.972 "data_offset": 2048, 00:14:36.972 "data_size": 63488 00:14:36.972 }, 00:14:36.972 { 00:14:36.972 "name": "BaseBdev3", 00:14:36.972 "uuid": "c54365e4-5251-573f-b896-fee61e49cbfa", 00:14:36.972 "is_configured": true, 00:14:36.972 "data_offset": 2048, 00:14:36.972 "data_size": 63488 00:14:36.972 } 00:14:36.972 ] 00:14:36.972 }' 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.972 10:42:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.539 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:37.539 10:42:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:37.539 [2024-10-30 10:42:58.995977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.473 [2024-10-30 10:42:59.835578] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:38.473 [2024-10-30 10:42:59.835814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.473 [2024-10-30 10:42:59.836105] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = 
\r\a\i\d\1 ]] 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.473 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:38.474 "name": "raid_bdev1", 00:14:38.474 "uuid": "ba8372f9-ac2c-474f-bc9b-7966e0e3dffc", 00:14:38.474 "strip_size_kb": 0, 00:14:38.474 "state": "online", 00:14:38.474 "raid_level": "raid1", 00:14:38.474 "superblock": true, 00:14:38.474 "num_base_bdevs": 3, 00:14:38.474 "num_base_bdevs_discovered": 2, 00:14:38.474 "num_base_bdevs_operational": 2, 00:14:38.474 "base_bdevs_list": [ 00:14:38.474 { 00:14:38.474 "name": null, 00:14:38.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.474 "is_configured": false, 00:14:38.474 "data_offset": 0, 00:14:38.474 "data_size": 63488 00:14:38.474 }, 00:14:38.474 { 00:14:38.474 "name": "BaseBdev2", 00:14:38.474 "uuid": "34170f88-8e90-53e5-839d-9ff8b2e34e81", 00:14:38.474 "is_configured": true, 00:14:38.474 "data_offset": 2048, 00:14:38.474 "data_size": 63488 00:14:38.474 }, 00:14:38.474 { 00:14:38.474 "name": "BaseBdev3", 00:14:38.474 "uuid": "c54365e4-5251-573f-b896-fee61e49cbfa", 00:14:38.474 "is_configured": true, 00:14:38.474 "data_offset": 2048, 00:14:38.474 "data_size": 63488 00:14:38.474 } 00:14:38.474 ] 00:14:38.474 }' 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.474 10:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.040 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.040 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.040 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.040 [2024-10-30 10:43:00.437572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.040 [2024-10-30 10:43:00.437780] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.040 [2024-10-30 10:43:00.441247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:14:39.040 [2024-10-30 10:43:00.441524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.040 [2024-10-30 10:43:00.441749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.040 [2024-10-30 10:43:00.441905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:39.040 { 00:14:39.040 "results": [ 00:14:39.040 { 00:14:39.040 "job": "raid_bdev1", 00:14:39.040 "core_mask": "0x1", 00:14:39.040 "workload": "randrw", 00:14:39.040 "percentage": 50, 00:14:39.040 "status": "finished", 00:14:39.040 "queue_depth": 1, 00:14:39.040 "io_size": 131072, 00:14:39.040 "runtime": 1.43903, 00:14:39.040 "iops": 10915.686260884067, 00:14:39.040 "mibps": 1364.4607826105084, 00:14:39.040 "io_failed": 0, 00:14:39.041 "io_timeout": 0, 00:14:39.041 "avg_latency_us": 87.33081278792508, 00:14:39.041 "min_latency_us": 37.93454545454546, 00:14:39.041 "max_latency_us": 1787.3454545454545 00:14:39.041 } 00:14:39.041 ], 00:14:39.041 "core_count": 1 00:14:39.041 } 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69490 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69490 ']' 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69490 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69490 00:14:39.041 killing process with pid 69490 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 
-- # process_name=reactor_0 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69490' 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69490 00:14:39.041 [2024-10-30 10:43:00.483305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.041 10:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69490 00:14:39.299 [2024-10-30 10:43:00.723588] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NZUC50dEwa 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:40.744 ************************************ 00:14:40.744 END TEST raid_write_error_test 00:14:40.744 ************************************ 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:40.744 00:14:40.744 real 0m4.787s 00:14:40.744 user 0m6.013s 00:14:40.744 sys 0m0.587s 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.744 10:43:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.744 10:43:01 bdev_raid -- 
bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:14:40.744 10:43:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:40.744 10:43:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:40.744 10:43:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:40.744 10:43:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:40.745 10:43:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:40.745 ************************************ 00:14:40.745 START TEST raid_state_function_test 00:14:40.745 ************************************ 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.745 10:43:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:40.745 Process raid 
pid: 69634 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69634 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69634' 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69634 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69634 ']' 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:40.745 10:43:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.745 [2024-10-30 10:43:02.018091] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:14:40.745 [2024-10-30 10:43:02.018424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.745 [2024-10-30 10:43:02.210021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.003 [2024-10-30 10:43:02.367088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.262 [2024-10-30 10:43:02.584545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.262 [2024-10-30 10:43:02.584616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.521 [2024-10-30 10:43:02.972164] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:41.521 [2024-10-30 10:43:02.972230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:41.521 [2024-10-30 10:43:02.972248] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:41.521 [2024-10-30 10:43:02.972265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:41.521 [2024-10-30 10:43:02.972275] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:41.521 [2024-10-30 10:43:02.972290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:41.521 [2024-10-30 10:43:02.972300] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:41.521 [2024-10-30 10:43:02.972314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.521 10:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.780 10:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.780 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.780 "name": "Existed_Raid", 00:14:41.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.780 "strip_size_kb": 64, 00:14:41.780 "state": "configuring", 00:14:41.780 "raid_level": "raid0", 00:14:41.780 "superblock": false, 00:14:41.780 "num_base_bdevs": 4, 00:14:41.780 "num_base_bdevs_discovered": 0, 00:14:41.780 "num_base_bdevs_operational": 4, 00:14:41.780 "base_bdevs_list": [ 00:14:41.780 { 00:14:41.780 "name": "BaseBdev1", 00:14:41.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.780 "is_configured": false, 00:14:41.780 "data_offset": 0, 00:14:41.780 "data_size": 0 00:14:41.780 }, 00:14:41.780 { 00:14:41.780 "name": "BaseBdev2", 00:14:41.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.780 "is_configured": false, 00:14:41.780 "data_offset": 0, 00:14:41.780 "data_size": 0 00:14:41.780 }, 00:14:41.780 { 00:14:41.780 "name": "BaseBdev3", 00:14:41.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.780 "is_configured": false, 00:14:41.780 "data_offset": 0, 00:14:41.780 "data_size": 0 00:14:41.780 }, 00:14:41.780 { 00:14:41.780 "name": "BaseBdev4", 00:14:41.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.780 "is_configured": false, 00:14:41.780 "data_offset": 0, 00:14:41.780 "data_size": 0 00:14:41.780 } 00:14:41.780 ] 00:14:41.780 }' 00:14:41.780 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.780 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.037 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:42.037 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.037 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.038 [2024-10-30 10:43:03.500303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.038 [2024-10-30 10:43:03.500360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:42.038 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.038 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:42.038 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.038 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.296 [2024-10-30 10:43:03.508274] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.296 [2024-10-30 10:43:03.508332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.296 [2024-10-30 10:43:03.508349] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.296 [2024-10-30 10:43:03.508366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.296 [2024-10-30 10:43:03.508376] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.296 [2024-10-30 10:43:03.508390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.296 [2024-10-30 10:43:03.508400] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.296 [2024-10-30 10:43:03.508414] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.296 [2024-10-30 10:43:03.553956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.296 BaseBdev1 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.296 [ 00:14:42.296 { 00:14:42.296 "name": "BaseBdev1", 00:14:42.296 "aliases": [ 00:14:42.296 "6a5d9593-8a82-4ab3-8d2b-c13fe450c14a" 00:14:42.296 ], 00:14:42.296 "product_name": "Malloc disk", 00:14:42.296 "block_size": 512, 00:14:42.296 "num_blocks": 65536, 00:14:42.296 "uuid": "6a5d9593-8a82-4ab3-8d2b-c13fe450c14a", 00:14:42.296 "assigned_rate_limits": { 00:14:42.296 "rw_ios_per_sec": 0, 00:14:42.296 "rw_mbytes_per_sec": 0, 00:14:42.296 "r_mbytes_per_sec": 0, 00:14:42.296 "w_mbytes_per_sec": 0 00:14:42.296 }, 00:14:42.296 "claimed": true, 00:14:42.296 "claim_type": "exclusive_write", 00:14:42.296 "zoned": false, 00:14:42.296 "supported_io_types": { 00:14:42.296 "read": true, 00:14:42.296 "write": true, 00:14:42.296 "unmap": true, 00:14:42.296 "flush": true, 00:14:42.296 "reset": true, 00:14:42.296 "nvme_admin": false, 00:14:42.296 "nvme_io": false, 00:14:42.296 "nvme_io_md": false, 00:14:42.296 "write_zeroes": true, 00:14:42.296 "zcopy": true, 00:14:42.296 "get_zone_info": false, 00:14:42.296 "zone_management": false, 00:14:42.296 "zone_append": false, 00:14:42.296 "compare": false, 00:14:42.296 "compare_and_write": false, 00:14:42.296 "abort": true, 00:14:42.296 "seek_hole": false, 00:14:42.296 "seek_data": false, 00:14:42.296 "copy": true, 00:14:42.296 "nvme_iov_md": false 00:14:42.296 }, 00:14:42.296 "memory_domains": [ 00:14:42.296 { 00:14:42.296 "dma_device_id": "system", 00:14:42.296 "dma_device_type": 1 00:14:42.296 }, 00:14:42.296 { 00:14:42.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.296 "dma_device_type": 2 00:14:42.296 } 00:14:42.296 ], 00:14:42.296 "driver_specific": {} 00:14:42.296 } 00:14:42.296 ] 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.296 "name": "Existed_Raid", 
00:14:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.296 "strip_size_kb": 64, 00:14:42.296 "state": "configuring", 00:14:42.296 "raid_level": "raid0", 00:14:42.296 "superblock": false, 00:14:42.296 "num_base_bdevs": 4, 00:14:42.296 "num_base_bdevs_discovered": 1, 00:14:42.296 "num_base_bdevs_operational": 4, 00:14:42.296 "base_bdevs_list": [ 00:14:42.296 { 00:14:42.296 "name": "BaseBdev1", 00:14:42.296 "uuid": "6a5d9593-8a82-4ab3-8d2b-c13fe450c14a", 00:14:42.296 "is_configured": true, 00:14:42.296 "data_offset": 0, 00:14:42.296 "data_size": 65536 00:14:42.296 }, 00:14:42.296 { 00:14:42.296 "name": "BaseBdev2", 00:14:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.296 "is_configured": false, 00:14:42.296 "data_offset": 0, 00:14:42.296 "data_size": 0 00:14:42.296 }, 00:14:42.296 { 00:14:42.296 "name": "BaseBdev3", 00:14:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.296 "is_configured": false, 00:14:42.296 "data_offset": 0, 00:14:42.296 "data_size": 0 00:14:42.296 }, 00:14:42.296 { 00:14:42.296 "name": "BaseBdev4", 00:14:42.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.296 "is_configured": false, 00:14:42.296 "data_offset": 0, 00:14:42.296 "data_size": 0 00:14:42.296 } 00:14:42.296 ] 00:14:42.296 }' 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.296 10:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.862 [2024-10-30 10:43:04.078103] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.862 [2024-10-30 10:43:04.078165] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.862 [2024-10-30 10:43:04.086178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.862 [2024-10-30 10:43:04.088657] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.862 [2024-10-30 10:43:04.088728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.862 [2024-10-30 10:43:04.088745] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:42.862 [2024-10-30 10:43:04.088763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:42.862 [2024-10-30 10:43:04.088773] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:42.862 [2024-10-30 10:43:04.088787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
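The trace above re-runs `bdev_raid_create` while only BaseBdev1 exists, then calls `verify_raid_bdev_state Existed_Raid configuring raid0 64 4` to confirm the array is still assembling. A minimal, self-contained sketch of that verification step follows; the JSON here is abridged from the `bdev_raid_get_bdevs` dumps in this log and fed from a shell variable, whereas the real harness pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` against a live SPDK target:

```shell
# Sample record, abridged from the bdev_raid_get_bdevs output captured above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}'

# Pull out the fields verify_raid_bdev_state compares, using jq as the test does.
state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
operational=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_operational')

# With only 1 of 4 base bdevs discovered, the raid bdev must still be "configuring".
[ "$state" = "configuring" ] || { echo "unexpected state: $state" >&2; exit 1; }
echo "state=$state discovered=$discovered/$operational"
```

This mirrors why the log keeps showing `"state": "configuring"`: the array only transitions once every base bdev in `base_bdevs_list` is discovered and configured.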
00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.862 "name": "Existed_Raid", 00:14:42.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.862 "strip_size_kb": 64, 00:14:42.862 "state": "configuring", 00:14:42.862 "raid_level": "raid0", 00:14:42.862 "superblock": false, 00:14:42.862 "num_base_bdevs": 4, 00:14:42.862 
"num_base_bdevs_discovered": 1, 00:14:42.862 "num_base_bdevs_operational": 4, 00:14:42.862 "base_bdevs_list": [ 00:14:42.862 { 00:14:42.862 "name": "BaseBdev1", 00:14:42.862 "uuid": "6a5d9593-8a82-4ab3-8d2b-c13fe450c14a", 00:14:42.862 "is_configured": true, 00:14:42.862 "data_offset": 0, 00:14:42.862 "data_size": 65536 00:14:42.862 }, 00:14:42.862 { 00:14:42.862 "name": "BaseBdev2", 00:14:42.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.862 "is_configured": false, 00:14:42.862 "data_offset": 0, 00:14:42.862 "data_size": 0 00:14:42.862 }, 00:14:42.862 { 00:14:42.862 "name": "BaseBdev3", 00:14:42.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.862 "is_configured": false, 00:14:42.862 "data_offset": 0, 00:14:42.862 "data_size": 0 00:14:42.862 }, 00:14:42.862 { 00:14:42.862 "name": "BaseBdev4", 00:14:42.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.862 "is_configured": false, 00:14:42.862 "data_offset": 0, 00:14:42.862 "data_size": 0 00:14:42.862 } 00:14:42.862 ] 00:14:42.862 }' 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.862 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.120 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:43.120 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.120 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.378 [2024-10-30 10:43:04.625025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.378 BaseBdev2 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:43.378 10:43:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.378 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.378 [ 00:14:43.378 { 00:14:43.378 "name": "BaseBdev2", 00:14:43.378 "aliases": [ 00:14:43.378 "3cd857d0-44d4-41eb-8dc5-8a3209eeba92" 00:14:43.378 ], 00:14:43.378 "product_name": "Malloc disk", 00:14:43.378 "block_size": 512, 00:14:43.378 "num_blocks": 65536, 00:14:43.378 "uuid": "3cd857d0-44d4-41eb-8dc5-8a3209eeba92", 00:14:43.378 "assigned_rate_limits": { 00:14:43.378 "rw_ios_per_sec": 0, 00:14:43.379 "rw_mbytes_per_sec": 0, 00:14:43.379 "r_mbytes_per_sec": 0, 00:14:43.379 "w_mbytes_per_sec": 0 00:14:43.379 }, 00:14:43.379 "claimed": true, 00:14:43.379 "claim_type": "exclusive_write", 00:14:43.379 "zoned": false, 00:14:43.379 "supported_io_types": { 
00:14:43.379 "read": true, 00:14:43.379 "write": true, 00:14:43.379 "unmap": true, 00:14:43.379 "flush": true, 00:14:43.379 "reset": true, 00:14:43.379 "nvme_admin": false, 00:14:43.379 "nvme_io": false, 00:14:43.379 "nvme_io_md": false, 00:14:43.379 "write_zeroes": true, 00:14:43.379 "zcopy": true, 00:14:43.379 "get_zone_info": false, 00:14:43.379 "zone_management": false, 00:14:43.379 "zone_append": false, 00:14:43.379 "compare": false, 00:14:43.379 "compare_and_write": false, 00:14:43.379 "abort": true, 00:14:43.379 "seek_hole": false, 00:14:43.379 "seek_data": false, 00:14:43.379 "copy": true, 00:14:43.379 "nvme_iov_md": false 00:14:43.379 }, 00:14:43.379 "memory_domains": [ 00:14:43.379 { 00:14:43.379 "dma_device_id": "system", 00:14:43.379 "dma_device_type": 1 00:14:43.379 }, 00:14:43.379 { 00:14:43.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.379 "dma_device_type": 2 00:14:43.379 } 00:14:43.379 ], 00:14:43.379 "driver_specific": {} 00:14:43.379 } 00:14:43.379 ] 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.379 "name": "Existed_Raid", 00:14:43.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.379 "strip_size_kb": 64, 00:14:43.379 "state": "configuring", 00:14:43.379 "raid_level": "raid0", 00:14:43.379 "superblock": false, 00:14:43.379 "num_base_bdevs": 4, 00:14:43.379 "num_base_bdevs_discovered": 2, 00:14:43.379 "num_base_bdevs_operational": 4, 00:14:43.379 "base_bdevs_list": [ 00:14:43.379 { 00:14:43.379 "name": "BaseBdev1", 00:14:43.379 "uuid": "6a5d9593-8a82-4ab3-8d2b-c13fe450c14a", 00:14:43.379 "is_configured": true, 00:14:43.379 "data_offset": 0, 00:14:43.379 "data_size": 65536 00:14:43.379 }, 00:14:43.379 { 00:14:43.379 "name": "BaseBdev2", 00:14:43.379 "uuid": "3cd857d0-44d4-41eb-8dc5-8a3209eeba92", 00:14:43.379 
"is_configured": true, 00:14:43.379 "data_offset": 0, 00:14:43.379 "data_size": 65536 00:14:43.379 }, 00:14:43.379 { 00:14:43.379 "name": "BaseBdev3", 00:14:43.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.379 "is_configured": false, 00:14:43.379 "data_offset": 0, 00:14:43.379 "data_size": 0 00:14:43.379 }, 00:14:43.379 { 00:14:43.379 "name": "BaseBdev4", 00:14:43.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.379 "is_configured": false, 00:14:43.379 "data_offset": 0, 00:14:43.379 "data_size": 0 00:14:43.379 } 00:14:43.379 ] 00:14:43.379 }' 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.379 10:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.978 [2024-10-30 10:43:05.228963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:43.978 BaseBdev3 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.978 [ 00:14:43.978 { 00:14:43.978 "name": "BaseBdev3", 00:14:43.978 "aliases": [ 00:14:43.978 "de0dfb83-6078-464b-b166-f35351eb0bf3" 00:14:43.978 ], 00:14:43.978 "product_name": "Malloc disk", 00:14:43.978 "block_size": 512, 00:14:43.978 "num_blocks": 65536, 00:14:43.978 "uuid": "de0dfb83-6078-464b-b166-f35351eb0bf3", 00:14:43.978 "assigned_rate_limits": { 00:14:43.978 "rw_ios_per_sec": 0, 00:14:43.978 "rw_mbytes_per_sec": 0, 00:14:43.978 "r_mbytes_per_sec": 0, 00:14:43.978 "w_mbytes_per_sec": 0 00:14:43.978 }, 00:14:43.978 "claimed": true, 00:14:43.978 "claim_type": "exclusive_write", 00:14:43.978 "zoned": false, 00:14:43.978 "supported_io_types": { 00:14:43.978 "read": true, 00:14:43.978 "write": true, 00:14:43.978 "unmap": true, 00:14:43.978 "flush": true, 00:14:43.978 "reset": true, 00:14:43.978 "nvme_admin": false, 00:14:43.978 "nvme_io": false, 00:14:43.978 "nvme_io_md": false, 00:14:43.978 "write_zeroes": true, 00:14:43.978 "zcopy": true, 00:14:43.978 "get_zone_info": false, 00:14:43.978 "zone_management": false, 00:14:43.978 "zone_append": false, 00:14:43.978 "compare": false, 00:14:43.978 "compare_and_write": false, 
00:14:43.978 "abort": true, 00:14:43.978 "seek_hole": false, 00:14:43.978 "seek_data": false, 00:14:43.978 "copy": true, 00:14:43.978 "nvme_iov_md": false 00:14:43.978 }, 00:14:43.978 "memory_domains": [ 00:14:43.978 { 00:14:43.978 "dma_device_id": "system", 00:14:43.978 "dma_device_type": 1 00:14:43.978 }, 00:14:43.978 { 00:14:43.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.978 "dma_device_type": 2 00:14:43.978 } 00:14:43.978 ], 00:14:43.978 "driver_specific": {} 00:14:43.978 } 00:14:43.978 ] 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.978 "name": "Existed_Raid", 00:14:43.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.978 "strip_size_kb": 64, 00:14:43.978 "state": "configuring", 00:14:43.978 "raid_level": "raid0", 00:14:43.978 "superblock": false, 00:14:43.978 "num_base_bdevs": 4, 00:14:43.978 "num_base_bdevs_discovered": 3, 00:14:43.978 "num_base_bdevs_operational": 4, 00:14:43.978 "base_bdevs_list": [ 00:14:43.978 { 00:14:43.978 "name": "BaseBdev1", 00:14:43.978 "uuid": "6a5d9593-8a82-4ab3-8d2b-c13fe450c14a", 00:14:43.978 "is_configured": true, 00:14:43.978 "data_offset": 0, 00:14:43.978 "data_size": 65536 00:14:43.978 }, 00:14:43.978 { 00:14:43.978 "name": "BaseBdev2", 00:14:43.978 "uuid": "3cd857d0-44d4-41eb-8dc5-8a3209eeba92", 00:14:43.978 "is_configured": true, 00:14:43.978 "data_offset": 0, 00:14:43.978 "data_size": 65536 00:14:43.978 }, 00:14:43.978 { 00:14:43.978 "name": "BaseBdev3", 00:14:43.978 "uuid": "de0dfb83-6078-464b-b166-f35351eb0bf3", 00:14:43.978 "is_configured": true, 00:14:43.978 "data_offset": 0, 00:14:43.978 "data_size": 65536 00:14:43.978 }, 00:14:43.978 { 00:14:43.978 "name": "BaseBdev4", 00:14:43.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.978 "is_configured": false, 
00:14:43.978 "data_offset": 0, 00:14:43.978 "data_size": 0 00:14:43.978 } 00:14:43.978 ] 00:14:43.978 }' 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.978 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.544 [2024-10-30 10:43:05.833008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.544 [2024-10-30 10:43:05.833068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:44.544 [2024-10-30 10:43:05.833151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:44.544 [2024-10-30 10:43:05.833563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:44.544 [2024-10-30 10:43:05.833808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:44.544 [2024-10-30 10:43:05.833830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:44.544 [2024-10-30 10:43:05.834182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.544 BaseBdev4 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.544 [ 00:14:44.544 { 00:14:44.544 "name": "BaseBdev4", 00:14:44.544 "aliases": [ 00:14:44.544 "a9ade746-b85a-4b9d-b5a6-8e5d67768814" 00:14:44.544 ], 00:14:44.544 "product_name": "Malloc disk", 00:14:44.544 "block_size": 512, 00:14:44.544 "num_blocks": 65536, 00:14:44.544 "uuid": "a9ade746-b85a-4b9d-b5a6-8e5d67768814", 00:14:44.544 "assigned_rate_limits": { 00:14:44.544 "rw_ios_per_sec": 0, 00:14:44.544 "rw_mbytes_per_sec": 0, 00:14:44.544 "r_mbytes_per_sec": 0, 00:14:44.544 "w_mbytes_per_sec": 0 00:14:44.544 }, 00:14:44.544 "claimed": true, 00:14:44.544 "claim_type": "exclusive_write", 00:14:44.544 "zoned": false, 00:14:44.544 "supported_io_types": { 00:14:44.544 "read": true, 00:14:44.544 "write": true, 00:14:44.544 "unmap": true, 00:14:44.544 "flush": true, 00:14:44.544 "reset": true, 00:14:44.544 
"nvme_admin": false, 00:14:44.544 "nvme_io": false, 00:14:44.544 "nvme_io_md": false, 00:14:44.544 "write_zeroes": true, 00:14:44.544 "zcopy": true, 00:14:44.544 "get_zone_info": false, 00:14:44.544 "zone_management": false, 00:14:44.544 "zone_append": false, 00:14:44.544 "compare": false, 00:14:44.544 "compare_and_write": false, 00:14:44.544 "abort": true, 00:14:44.544 "seek_hole": false, 00:14:44.544 "seek_data": false, 00:14:44.544 "copy": true, 00:14:44.544 "nvme_iov_md": false 00:14:44.544 }, 00:14:44.544 "memory_domains": [ 00:14:44.544 { 00:14:44.544 "dma_device_id": "system", 00:14:44.544 "dma_device_type": 1 00:14:44.544 }, 00:14:44.544 { 00:14:44.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.544 "dma_device_type": 2 00:14:44.544 } 00:14:44.544 ], 00:14:44.544 "driver_specific": {} 00:14:44.544 } 00:14:44.544 ] 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.544 10:43:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.544 "name": "Existed_Raid", 00:14:44.544 "uuid": "1bfd5169-4bb9-4b31-bd8e-57d105cc603a", 00:14:44.544 "strip_size_kb": 64, 00:14:44.544 "state": "online", 00:14:44.544 "raid_level": "raid0", 00:14:44.544 "superblock": false, 00:14:44.544 "num_base_bdevs": 4, 00:14:44.544 "num_base_bdevs_discovered": 4, 00:14:44.544 "num_base_bdevs_operational": 4, 00:14:44.544 "base_bdevs_list": [ 00:14:44.544 { 00:14:44.544 "name": "BaseBdev1", 00:14:44.544 "uuid": "6a5d9593-8a82-4ab3-8d2b-c13fe450c14a", 00:14:44.544 "is_configured": true, 00:14:44.544 "data_offset": 0, 00:14:44.544 "data_size": 65536 00:14:44.544 }, 00:14:44.544 { 00:14:44.544 "name": "BaseBdev2", 00:14:44.544 "uuid": "3cd857d0-44d4-41eb-8dc5-8a3209eeba92", 00:14:44.544 "is_configured": true, 00:14:44.544 "data_offset": 0, 00:14:44.544 "data_size": 65536 00:14:44.544 }, 00:14:44.544 { 00:14:44.544 "name": "BaseBdev3", 00:14:44.544 "uuid": 
"de0dfb83-6078-464b-b166-f35351eb0bf3", 00:14:44.544 "is_configured": true, 00:14:44.544 "data_offset": 0, 00:14:44.544 "data_size": 65536 00:14:44.544 }, 00:14:44.544 { 00:14:44.544 "name": "BaseBdev4", 00:14:44.544 "uuid": "a9ade746-b85a-4b9d-b5a6-8e5d67768814", 00:14:44.544 "is_configured": true, 00:14:44.544 "data_offset": 0, 00:14:44.544 "data_size": 65536 00:14:44.544 } 00:14:44.544 ] 00:14:44.544 }' 00:14:44.544 10:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.545 10:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:45.172 [2024-10-30 10:43:06.381732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.172 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.172 10:43:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:45.172 "name": "Existed_Raid", 00:14:45.172 "aliases": [ 00:14:45.172 "1bfd5169-4bb9-4b31-bd8e-57d105cc603a" 00:14:45.172 ], 00:14:45.172 "product_name": "Raid Volume", 00:14:45.172 "block_size": 512, 00:14:45.172 "num_blocks": 262144, 00:14:45.172 "uuid": "1bfd5169-4bb9-4b31-bd8e-57d105cc603a", 00:14:45.172 "assigned_rate_limits": { 00:14:45.172 "rw_ios_per_sec": 0, 00:14:45.172 "rw_mbytes_per_sec": 0, 00:14:45.172 "r_mbytes_per_sec": 0, 00:14:45.172 "w_mbytes_per_sec": 0 00:14:45.172 }, 00:14:45.172 "claimed": false, 00:14:45.172 "zoned": false, 00:14:45.172 "supported_io_types": { 00:14:45.172 "read": true, 00:14:45.172 "write": true, 00:14:45.172 "unmap": true, 00:14:45.172 "flush": true, 00:14:45.172 "reset": true, 00:14:45.172 "nvme_admin": false, 00:14:45.172 "nvme_io": false, 00:14:45.172 "nvme_io_md": false, 00:14:45.172 "write_zeroes": true, 00:14:45.172 "zcopy": false, 00:14:45.172 "get_zone_info": false, 00:14:45.172 "zone_management": false, 00:14:45.172 "zone_append": false, 00:14:45.172 "compare": false, 00:14:45.172 "compare_and_write": false, 00:14:45.172 "abort": false, 00:14:45.172 "seek_hole": false, 00:14:45.172 "seek_data": false, 00:14:45.172 "copy": false, 00:14:45.172 "nvme_iov_md": false 00:14:45.172 }, 00:14:45.172 "memory_domains": [ 00:14:45.172 { 00:14:45.172 "dma_device_id": "system", 00:14:45.172 "dma_device_type": 1 00:14:45.172 }, 00:14:45.172 { 00:14:45.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.173 "dma_device_type": 2 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "dma_device_id": "system", 00:14:45.173 "dma_device_type": 1 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.173 "dma_device_type": 2 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "dma_device_id": "system", 00:14:45.173 "dma_device_type": 1 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:45.173 "dma_device_type": 2 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "dma_device_id": "system", 00:14:45.173 "dma_device_type": 1 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.173 "dma_device_type": 2 00:14:45.173 } 00:14:45.173 ], 00:14:45.173 "driver_specific": { 00:14:45.173 "raid": { 00:14:45.173 "uuid": "1bfd5169-4bb9-4b31-bd8e-57d105cc603a", 00:14:45.173 "strip_size_kb": 64, 00:14:45.173 "state": "online", 00:14:45.173 "raid_level": "raid0", 00:14:45.173 "superblock": false, 00:14:45.173 "num_base_bdevs": 4, 00:14:45.173 "num_base_bdevs_discovered": 4, 00:14:45.173 "num_base_bdevs_operational": 4, 00:14:45.173 "base_bdevs_list": [ 00:14:45.173 { 00:14:45.173 "name": "BaseBdev1", 00:14:45.173 "uuid": "6a5d9593-8a82-4ab3-8d2b-c13fe450c14a", 00:14:45.173 "is_configured": true, 00:14:45.173 "data_offset": 0, 00:14:45.173 "data_size": 65536 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "name": "BaseBdev2", 00:14:45.173 "uuid": "3cd857d0-44d4-41eb-8dc5-8a3209eeba92", 00:14:45.173 "is_configured": true, 00:14:45.173 "data_offset": 0, 00:14:45.173 "data_size": 65536 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "name": "BaseBdev3", 00:14:45.173 "uuid": "de0dfb83-6078-464b-b166-f35351eb0bf3", 00:14:45.173 "is_configured": true, 00:14:45.173 "data_offset": 0, 00:14:45.173 "data_size": 65536 00:14:45.173 }, 00:14:45.173 { 00:14:45.173 "name": "BaseBdev4", 00:14:45.173 "uuid": "a9ade746-b85a-4b9d-b5a6-8e5d67768814", 00:14:45.173 "is_configured": true, 00:14:45.173 "data_offset": 0, 00:14:45.173 "data_size": 65536 00:14:45.173 } 00:14:45.173 ] 00:14:45.173 } 00:14:45.173 } 00:14:45.173 }' 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:45.173 BaseBdev2 00:14:45.173 BaseBdev3 
00:14:45.173 BaseBdev4' 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.173 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.444 10:43:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.444 10:43:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.444 [2024-10-30 10:43:06.733521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.444 [2024-10-30 10:43:06.733560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.444 [2024-10-30 10:43:06.733621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.444 "name": "Existed_Raid", 00:14:45.444 "uuid": "1bfd5169-4bb9-4b31-bd8e-57d105cc603a", 00:14:45.444 "strip_size_kb": 64, 00:14:45.444 "state": "offline", 00:14:45.444 "raid_level": "raid0", 00:14:45.444 "superblock": false, 00:14:45.444 "num_base_bdevs": 4, 00:14:45.444 "num_base_bdevs_discovered": 3, 00:14:45.444 "num_base_bdevs_operational": 3, 00:14:45.444 "base_bdevs_list": [ 00:14:45.444 { 00:14:45.444 "name": null, 00:14:45.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.444 "is_configured": false, 00:14:45.444 "data_offset": 0, 00:14:45.444 "data_size": 65536 00:14:45.444 }, 00:14:45.444 { 00:14:45.444 "name": "BaseBdev2", 00:14:45.444 "uuid": "3cd857d0-44d4-41eb-8dc5-8a3209eeba92", 00:14:45.444 "is_configured": 
true, 00:14:45.444 "data_offset": 0, 00:14:45.444 "data_size": 65536 00:14:45.444 }, 00:14:45.444 { 00:14:45.444 "name": "BaseBdev3", 00:14:45.444 "uuid": "de0dfb83-6078-464b-b166-f35351eb0bf3", 00:14:45.444 "is_configured": true, 00:14:45.444 "data_offset": 0, 00:14:45.444 "data_size": 65536 00:14:45.444 }, 00:14:45.444 { 00:14:45.444 "name": "BaseBdev4", 00:14:45.444 "uuid": "a9ade746-b85a-4b9d-b5a6-8e5d67768814", 00:14:45.444 "is_configured": true, 00:14:45.444 "data_offset": 0, 00:14:45.444 "data_size": 65536 00:14:45.444 } 00:14:45.444 ] 00:14:45.444 }' 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.444 10:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.012 [2024-10-30 10:43:07.396084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.012 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.271 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.271 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:46.271 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.271 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.272 [2024-10-30 10:43:07.533968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.272 10:43:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.272 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.272 [2024-10-30 10:43:07.678319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:46.272 [2024-10-30 10:43:07.678395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.530 BaseBdev2 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.530 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.530 [ 00:14:46.530 { 00:14:46.530 "name": "BaseBdev2", 00:14:46.530 "aliases": [ 00:14:46.530 "6274db37-889b-4e1f-8135-dd55371313ee" 00:14:46.530 ], 00:14:46.530 "product_name": "Malloc disk", 00:14:46.530 "block_size": 512, 00:14:46.530 "num_blocks": 65536, 00:14:46.530 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:46.530 "assigned_rate_limits": { 00:14:46.530 "rw_ios_per_sec": 0, 00:14:46.530 "rw_mbytes_per_sec": 0, 00:14:46.530 "r_mbytes_per_sec": 0, 00:14:46.530 "w_mbytes_per_sec": 0 00:14:46.530 }, 00:14:46.530 "claimed": false, 00:14:46.530 "zoned": false, 00:14:46.530 "supported_io_types": { 00:14:46.530 "read": true, 00:14:46.530 "write": true, 00:14:46.530 "unmap": true, 00:14:46.530 "flush": true, 00:14:46.530 "reset": true, 00:14:46.530 "nvme_admin": false, 00:14:46.530 "nvme_io": false, 00:14:46.530 "nvme_io_md": false, 00:14:46.530 "write_zeroes": true, 00:14:46.530 "zcopy": true, 00:14:46.530 "get_zone_info": false, 00:14:46.530 "zone_management": false, 00:14:46.530 "zone_append": false, 00:14:46.531 "compare": false, 00:14:46.531 "compare_and_write": false, 00:14:46.531 "abort": true, 00:14:46.531 "seek_hole": false, 00:14:46.531 
"seek_data": false, 00:14:46.531 "copy": true, 00:14:46.531 "nvme_iov_md": false 00:14:46.531 }, 00:14:46.531 "memory_domains": [ 00:14:46.531 { 00:14:46.531 "dma_device_id": "system", 00:14:46.531 "dma_device_type": 1 00:14:46.531 }, 00:14:46.531 { 00:14:46.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.531 "dma_device_type": 2 00:14:46.531 } 00:14:46.531 ], 00:14:46.531 "driver_specific": {} 00:14:46.531 } 00:14:46.531 ] 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.531 BaseBdev3 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.531 [ 00:14:46.531 { 00:14:46.531 "name": "BaseBdev3", 00:14:46.531 "aliases": [ 00:14:46.531 "bd80cc21-c5ef-4627-85fb-a594a1015f37" 00:14:46.531 ], 00:14:46.531 "product_name": "Malloc disk", 00:14:46.531 "block_size": 512, 00:14:46.531 "num_blocks": 65536, 00:14:46.531 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:46.531 "assigned_rate_limits": { 00:14:46.531 "rw_ios_per_sec": 0, 00:14:46.531 "rw_mbytes_per_sec": 0, 00:14:46.531 "r_mbytes_per_sec": 0, 00:14:46.531 "w_mbytes_per_sec": 0 00:14:46.531 }, 00:14:46.531 "claimed": false, 00:14:46.531 "zoned": false, 00:14:46.531 "supported_io_types": { 00:14:46.531 "read": true, 00:14:46.531 "write": true, 00:14:46.531 "unmap": true, 00:14:46.531 "flush": true, 00:14:46.531 "reset": true, 00:14:46.531 "nvme_admin": false, 00:14:46.531 "nvme_io": false, 00:14:46.531 "nvme_io_md": false, 00:14:46.531 "write_zeroes": true, 00:14:46.531 "zcopy": true, 00:14:46.531 "get_zone_info": false, 00:14:46.531 "zone_management": false, 00:14:46.531 "zone_append": false, 00:14:46.531 "compare": false, 00:14:46.531 "compare_and_write": false, 00:14:46.531 "abort": true, 00:14:46.531 "seek_hole": false, 00:14:46.531 "seek_data": false, 
00:14:46.531 "copy": true, 00:14:46.531 "nvme_iov_md": false 00:14:46.531 }, 00:14:46.531 "memory_domains": [ 00:14:46.531 { 00:14:46.531 "dma_device_id": "system", 00:14:46.531 "dma_device_type": 1 00:14:46.531 }, 00:14:46.531 { 00:14:46.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.531 "dma_device_type": 2 00:14:46.531 } 00:14:46.531 ], 00:14:46.531 "driver_specific": {} 00:14:46.531 } 00:14:46.531 ] 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.531 10:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 BaseBdev4 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:46.790 
10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 [ 00:14:46.790 { 00:14:46.790 "name": "BaseBdev4", 00:14:46.790 "aliases": [ 00:14:46.790 "36e3b5a5-ed75-4273-acc1-d8bb328780ed" 00:14:46.790 ], 00:14:46.790 "product_name": "Malloc disk", 00:14:46.790 "block_size": 512, 00:14:46.790 "num_blocks": 65536, 00:14:46.790 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:46.790 "assigned_rate_limits": { 00:14:46.790 "rw_ios_per_sec": 0, 00:14:46.790 "rw_mbytes_per_sec": 0, 00:14:46.790 "r_mbytes_per_sec": 0, 00:14:46.790 "w_mbytes_per_sec": 0 00:14:46.790 }, 00:14:46.790 "claimed": false, 00:14:46.790 "zoned": false, 00:14:46.790 "supported_io_types": { 00:14:46.790 "read": true, 00:14:46.790 "write": true, 00:14:46.790 "unmap": true, 00:14:46.790 "flush": true, 00:14:46.790 "reset": true, 00:14:46.790 "nvme_admin": false, 00:14:46.790 "nvme_io": false, 00:14:46.790 "nvme_io_md": false, 00:14:46.790 "write_zeroes": true, 00:14:46.790 "zcopy": true, 00:14:46.790 "get_zone_info": false, 00:14:46.790 "zone_management": false, 00:14:46.790 "zone_append": false, 00:14:46.790 "compare": false, 00:14:46.790 "compare_and_write": false, 00:14:46.790 "abort": true, 00:14:46.790 "seek_hole": false, 00:14:46.790 "seek_data": false, 00:14:46.790 
"copy": true, 00:14:46.790 "nvme_iov_md": false 00:14:46.790 }, 00:14:46.790 "memory_domains": [ 00:14:46.790 { 00:14:46.790 "dma_device_id": "system", 00:14:46.790 "dma_device_type": 1 00:14:46.790 }, 00:14:46.790 { 00:14:46.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.790 "dma_device_type": 2 00:14:46.790 } 00:14:46.790 ], 00:14:46.790 "driver_specific": {} 00:14:46.790 } 00:14:46.790 ] 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.790 [2024-10-30 10:43:08.052496] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.790 [2024-10-30 10:43:08.052719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.790 [2024-10-30 10:43:08.052786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.790 [2024-10-30 10:43:08.055486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.790 [2024-10-30 10:43:08.055695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.790 10:43:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.790 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.791 "name": "Existed_Raid", 00:14:46.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.791 "strip_size_kb": 64, 00:14:46.791 "state": "configuring", 00:14:46.791 
"raid_level": "raid0", 00:14:46.791 "superblock": false, 00:14:46.791 "num_base_bdevs": 4, 00:14:46.791 "num_base_bdevs_discovered": 3, 00:14:46.791 "num_base_bdevs_operational": 4, 00:14:46.791 "base_bdevs_list": [ 00:14:46.791 { 00:14:46.791 "name": "BaseBdev1", 00:14:46.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.791 "is_configured": false, 00:14:46.791 "data_offset": 0, 00:14:46.791 "data_size": 0 00:14:46.791 }, 00:14:46.791 { 00:14:46.791 "name": "BaseBdev2", 00:14:46.791 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:46.791 "is_configured": true, 00:14:46.791 "data_offset": 0, 00:14:46.791 "data_size": 65536 00:14:46.791 }, 00:14:46.791 { 00:14:46.791 "name": "BaseBdev3", 00:14:46.791 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:46.791 "is_configured": true, 00:14:46.791 "data_offset": 0, 00:14:46.791 "data_size": 65536 00:14:46.791 }, 00:14:46.791 { 00:14:46.791 "name": "BaseBdev4", 00:14:46.791 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:46.791 "is_configured": true, 00:14:46.791 "data_offset": 0, 00:14:46.791 "data_size": 65536 00:14:46.791 } 00:14:46.791 ] 00:14:46.791 }' 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.791 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.360 [2024-10-30 10:43:08.592654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.360 "name": "Existed_Raid", 00:14:47.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.360 "strip_size_kb": 64, 00:14:47.360 "state": "configuring", 00:14:47.360 "raid_level": "raid0", 00:14:47.360 "superblock": false, 00:14:47.360 
"num_base_bdevs": 4, 00:14:47.360 "num_base_bdevs_discovered": 2, 00:14:47.360 "num_base_bdevs_operational": 4, 00:14:47.360 "base_bdevs_list": [ 00:14:47.360 { 00:14:47.360 "name": "BaseBdev1", 00:14:47.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.360 "is_configured": false, 00:14:47.360 "data_offset": 0, 00:14:47.360 "data_size": 0 00:14:47.360 }, 00:14:47.360 { 00:14:47.360 "name": null, 00:14:47.360 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:47.360 "is_configured": false, 00:14:47.360 "data_offset": 0, 00:14:47.360 "data_size": 65536 00:14:47.360 }, 00:14:47.360 { 00:14:47.360 "name": "BaseBdev3", 00:14:47.360 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:47.360 "is_configured": true, 00:14:47.360 "data_offset": 0, 00:14:47.360 "data_size": 65536 00:14:47.360 }, 00:14:47.360 { 00:14:47.360 "name": "BaseBdev4", 00:14:47.360 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:47.360 "is_configured": true, 00:14:47.360 "data_offset": 0, 00:14:47.360 "data_size": 65536 00:14:47.360 } 00:14:47.360 ] 00:14:47.360 }' 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.360 10:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:47.928 10:43:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.928 [2024-10-30 10:43:09.199831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.928 BaseBdev1 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.928 [ 00:14:47.928 { 00:14:47.928 "name": "BaseBdev1", 00:14:47.928 "aliases": [ 00:14:47.928 "96e39cdc-5c60-4dfa-afe5-1ec125941de9" 00:14:47.928 ], 00:14:47.928 "product_name": "Malloc disk", 00:14:47.928 "block_size": 512, 00:14:47.928 "num_blocks": 65536, 00:14:47.928 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:47.928 "assigned_rate_limits": { 00:14:47.928 "rw_ios_per_sec": 0, 00:14:47.928 "rw_mbytes_per_sec": 0, 00:14:47.928 "r_mbytes_per_sec": 0, 00:14:47.928 "w_mbytes_per_sec": 0 00:14:47.928 }, 00:14:47.928 "claimed": true, 00:14:47.928 "claim_type": "exclusive_write", 00:14:47.928 "zoned": false, 00:14:47.928 "supported_io_types": { 00:14:47.928 "read": true, 00:14:47.928 "write": true, 00:14:47.928 "unmap": true, 00:14:47.928 "flush": true, 00:14:47.928 "reset": true, 00:14:47.928 "nvme_admin": false, 00:14:47.928 "nvme_io": false, 00:14:47.928 "nvme_io_md": false, 00:14:47.928 "write_zeroes": true, 00:14:47.928 "zcopy": true, 00:14:47.928 "get_zone_info": false, 00:14:47.928 "zone_management": false, 00:14:47.928 "zone_append": false, 00:14:47.928 "compare": false, 00:14:47.928 "compare_and_write": false, 00:14:47.928 "abort": true, 00:14:47.928 "seek_hole": false, 00:14:47.928 "seek_data": false, 00:14:47.928 "copy": true, 00:14:47.928 "nvme_iov_md": false 00:14:47.928 }, 00:14:47.928 "memory_domains": [ 00:14:47.928 { 00:14:47.928 "dma_device_id": "system", 00:14:47.928 "dma_device_type": 1 00:14:47.928 }, 00:14:47.928 { 00:14:47.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.928 "dma_device_type": 2 00:14:47.928 } 00:14:47.928 ], 00:14:47.928 "driver_specific": {} 00:14:47.928 } 00:14:47.928 ] 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.928 "name": "Existed_Raid", 00:14:47.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.928 "strip_size_kb": 64, 00:14:47.928 "state": "configuring", 00:14:47.928 "raid_level": "raid0", 00:14:47.928 "superblock": false, 
00:14:47.928 "num_base_bdevs": 4, 00:14:47.928 "num_base_bdevs_discovered": 3, 00:14:47.928 "num_base_bdevs_operational": 4, 00:14:47.928 "base_bdevs_list": [ 00:14:47.928 { 00:14:47.928 "name": "BaseBdev1", 00:14:47.928 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:47.928 "is_configured": true, 00:14:47.928 "data_offset": 0, 00:14:47.928 "data_size": 65536 00:14:47.928 }, 00:14:47.928 { 00:14:47.928 "name": null, 00:14:47.928 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:47.928 "is_configured": false, 00:14:47.928 "data_offset": 0, 00:14:47.928 "data_size": 65536 00:14:47.928 }, 00:14:47.928 { 00:14:47.928 "name": "BaseBdev3", 00:14:47.928 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:47.928 "is_configured": true, 00:14:47.928 "data_offset": 0, 00:14:47.928 "data_size": 65536 00:14:47.928 }, 00:14:47.928 { 00:14:47.928 "name": "BaseBdev4", 00:14:47.928 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:47.928 "is_configured": true, 00:14:47.928 "data_offset": 0, 00:14:47.928 "data_size": 65536 00:14:47.928 } 00:14:47.928 ] 00:14:47.928 }' 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.928 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:48.497 10:43:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.497 [2024-10-30 10:43:09.816112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.497 10:43:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.497 "name": "Existed_Raid", 00:14:48.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.497 "strip_size_kb": 64, 00:14:48.497 "state": "configuring", 00:14:48.497 "raid_level": "raid0", 00:14:48.497 "superblock": false, 00:14:48.497 "num_base_bdevs": 4, 00:14:48.497 "num_base_bdevs_discovered": 2, 00:14:48.497 "num_base_bdevs_operational": 4, 00:14:48.497 "base_bdevs_list": [ 00:14:48.497 { 00:14:48.497 "name": "BaseBdev1", 00:14:48.497 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:48.497 "is_configured": true, 00:14:48.497 "data_offset": 0, 00:14:48.497 "data_size": 65536 00:14:48.497 }, 00:14:48.497 { 00:14:48.497 "name": null, 00:14:48.497 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:48.497 "is_configured": false, 00:14:48.497 "data_offset": 0, 00:14:48.497 "data_size": 65536 00:14:48.497 }, 00:14:48.497 { 00:14:48.497 "name": null, 00:14:48.497 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:48.497 "is_configured": false, 00:14:48.497 "data_offset": 0, 00:14:48.497 "data_size": 65536 00:14:48.497 }, 00:14:48.497 { 00:14:48.497 "name": "BaseBdev4", 00:14:48.497 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:48.497 "is_configured": true, 00:14:48.497 "data_offset": 0, 00:14:48.497 "data_size": 65536 00:14:48.497 } 00:14:48.497 ] 00:14:48.497 }' 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.497 10:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.062 10:43:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.062 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.062 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.063 [2024-10-30 10:43:10.384261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.063 "name": "Existed_Raid", 00:14:49.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.063 "strip_size_kb": 64, 00:14:49.063 "state": "configuring", 00:14:49.063 "raid_level": "raid0", 00:14:49.063 "superblock": false, 00:14:49.063 "num_base_bdevs": 4, 00:14:49.063 "num_base_bdevs_discovered": 3, 00:14:49.063 "num_base_bdevs_operational": 4, 00:14:49.063 "base_bdevs_list": [ 00:14:49.063 { 00:14:49.063 "name": "BaseBdev1", 00:14:49.063 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:49.063 "is_configured": true, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 65536 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": null, 00:14:49.063 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:49.063 "is_configured": false, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 65536 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": "BaseBdev3", 00:14:49.063 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 
00:14:49.063 "is_configured": true, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 65536 00:14:49.063 }, 00:14:49.063 { 00:14:49.063 "name": "BaseBdev4", 00:14:49.063 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:49.063 "is_configured": true, 00:14:49.063 "data_offset": 0, 00:14:49.063 "data_size": 65536 00:14:49.063 } 00:14:49.063 ] 00:14:49.063 }' 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.063 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.629 10:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.629 [2024-10-30 10:43:10.948617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:49.629 10:43:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.629 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.630 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.630 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.630 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.630 "name": "Existed_Raid", 00:14:49.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.630 "strip_size_kb": 64, 00:14:49.630 "state": "configuring", 00:14:49.630 "raid_level": "raid0", 00:14:49.630 "superblock": false, 00:14:49.630 "num_base_bdevs": 4, 00:14:49.630 "num_base_bdevs_discovered": 2, 00:14:49.630 
"num_base_bdevs_operational": 4, 00:14:49.630 "base_bdevs_list": [ 00:14:49.630 { 00:14:49.630 "name": null, 00:14:49.630 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:49.630 "is_configured": false, 00:14:49.630 "data_offset": 0, 00:14:49.630 "data_size": 65536 00:14:49.630 }, 00:14:49.630 { 00:14:49.630 "name": null, 00:14:49.630 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:49.630 "is_configured": false, 00:14:49.630 "data_offset": 0, 00:14:49.630 "data_size": 65536 00:14:49.630 }, 00:14:49.630 { 00:14:49.630 "name": "BaseBdev3", 00:14:49.630 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:49.630 "is_configured": true, 00:14:49.630 "data_offset": 0, 00:14:49.630 "data_size": 65536 00:14:49.630 }, 00:14:49.630 { 00:14:49.630 "name": "BaseBdev4", 00:14:49.630 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:49.630 "is_configured": true, 00:14:49.630 "data_offset": 0, 00:14:49.630 "data_size": 65536 00:14:49.630 } 00:14:49.630 ] 00:14:49.630 }' 00:14:49.630 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.630 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.197 [2024-10-30 10:43:11.599075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.197 10:43:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.197 "name": "Existed_Raid", 00:14:50.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.197 "strip_size_kb": 64, 00:14:50.197 "state": "configuring", 00:14:50.197 "raid_level": "raid0", 00:14:50.197 "superblock": false, 00:14:50.197 "num_base_bdevs": 4, 00:14:50.197 "num_base_bdevs_discovered": 3, 00:14:50.197 "num_base_bdevs_operational": 4, 00:14:50.197 "base_bdevs_list": [ 00:14:50.197 { 00:14:50.197 "name": null, 00:14:50.197 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:50.197 "is_configured": false, 00:14:50.197 "data_offset": 0, 00:14:50.197 "data_size": 65536 00:14:50.197 }, 00:14:50.197 { 00:14:50.197 "name": "BaseBdev2", 00:14:50.197 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:50.197 "is_configured": true, 00:14:50.197 "data_offset": 0, 00:14:50.197 "data_size": 65536 00:14:50.197 }, 00:14:50.197 { 00:14:50.197 "name": "BaseBdev3", 00:14:50.197 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:50.197 "is_configured": true, 00:14:50.197 "data_offset": 0, 00:14:50.197 "data_size": 65536 00:14:50.197 }, 00:14:50.197 { 00:14:50.197 "name": "BaseBdev4", 00:14:50.197 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:50.197 "is_configured": true, 00:14:50.197 "data_offset": 0, 00:14:50.197 "data_size": 65536 00:14:50.197 } 00:14:50.197 ] 00:14:50.197 }' 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.197 10:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:50.764 
10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 96e39cdc-5c60-4dfa-afe5-1ec125941de9 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.764 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.024 [2024-10-30 10:43:12.244399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:51.024 [2024-10-30 10:43:12.244463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:51.024 [2024-10-30 10:43:12.244476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:51.024 [2024-10-30 10:43:12.244826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:14:51.024 [2024-10-30 10:43:12.245047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:51.024 [2024-10-30 10:43:12.245071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:51.024 [2024-10-30 10:43:12.245389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.024 NewBaseBdev 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:51.024 [ 00:14:51.024 { 00:14:51.024 "name": "NewBaseBdev", 00:14:51.024 "aliases": [ 00:14:51.024 "96e39cdc-5c60-4dfa-afe5-1ec125941de9" 00:14:51.024 ], 00:14:51.024 "product_name": "Malloc disk", 00:14:51.024 "block_size": 512, 00:14:51.024 "num_blocks": 65536, 00:14:51.024 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:51.024 "assigned_rate_limits": { 00:14:51.024 "rw_ios_per_sec": 0, 00:14:51.024 "rw_mbytes_per_sec": 0, 00:14:51.024 "r_mbytes_per_sec": 0, 00:14:51.024 "w_mbytes_per_sec": 0 00:14:51.024 }, 00:14:51.024 "claimed": true, 00:14:51.024 "claim_type": "exclusive_write", 00:14:51.024 "zoned": false, 00:14:51.024 "supported_io_types": { 00:14:51.024 "read": true, 00:14:51.024 "write": true, 00:14:51.024 "unmap": true, 00:14:51.024 "flush": true, 00:14:51.024 "reset": true, 00:14:51.024 "nvme_admin": false, 00:14:51.024 "nvme_io": false, 00:14:51.024 "nvme_io_md": false, 00:14:51.024 "write_zeroes": true, 00:14:51.024 "zcopy": true, 00:14:51.024 "get_zone_info": false, 00:14:51.024 "zone_management": false, 00:14:51.024 "zone_append": false, 00:14:51.024 "compare": false, 00:14:51.024 "compare_and_write": false, 00:14:51.024 "abort": true, 00:14:51.024 "seek_hole": false, 00:14:51.024 "seek_data": false, 00:14:51.024 "copy": true, 00:14:51.024 "nvme_iov_md": false 00:14:51.024 }, 00:14:51.024 "memory_domains": [ 00:14:51.024 { 00:14:51.024 "dma_device_id": "system", 00:14:51.024 "dma_device_type": 1 00:14:51.024 }, 00:14:51.024 { 00:14:51.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.024 "dma_device_type": 2 00:14:51.024 } 00:14:51.024 ], 00:14:51.024 "driver_specific": {} 00:14:51.024 } 00:14:51.024 ] 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.024 "name": "Existed_Raid", 00:14:51.024 "uuid": "e4c2eaac-5a61-44b2-af8f-921939277392", 00:14:51.024 "strip_size_kb": 64, 00:14:51.024 "state": "online", 00:14:51.024 "raid_level": "raid0", 00:14:51.024 "superblock": false, 00:14:51.024 "num_base_bdevs": 4, 00:14:51.024 
"num_base_bdevs_discovered": 4, 00:14:51.024 "num_base_bdevs_operational": 4, 00:14:51.024 "base_bdevs_list": [ 00:14:51.024 { 00:14:51.024 "name": "NewBaseBdev", 00:14:51.024 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:51.024 "is_configured": true, 00:14:51.024 "data_offset": 0, 00:14:51.024 "data_size": 65536 00:14:51.024 }, 00:14:51.024 { 00:14:51.024 "name": "BaseBdev2", 00:14:51.024 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:51.024 "is_configured": true, 00:14:51.024 "data_offset": 0, 00:14:51.024 "data_size": 65536 00:14:51.024 }, 00:14:51.024 { 00:14:51.024 "name": "BaseBdev3", 00:14:51.024 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:51.024 "is_configured": true, 00:14:51.024 "data_offset": 0, 00:14:51.024 "data_size": 65536 00:14:51.024 }, 00:14:51.024 { 00:14:51.024 "name": "BaseBdev4", 00:14:51.024 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:51.024 "is_configured": true, 00:14:51.024 "data_offset": 0, 00:14:51.024 "data_size": 65536 00:14:51.024 } 00:14:51.024 ] 00:14:51.024 }' 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.024 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.592 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:51.592 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:51.592 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.593 [2024-10-30 10:43:12.761124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:51.593 "name": "Existed_Raid", 00:14:51.593 "aliases": [ 00:14:51.593 "e4c2eaac-5a61-44b2-af8f-921939277392" 00:14:51.593 ], 00:14:51.593 "product_name": "Raid Volume", 00:14:51.593 "block_size": 512, 00:14:51.593 "num_blocks": 262144, 00:14:51.593 "uuid": "e4c2eaac-5a61-44b2-af8f-921939277392", 00:14:51.593 "assigned_rate_limits": { 00:14:51.593 "rw_ios_per_sec": 0, 00:14:51.593 "rw_mbytes_per_sec": 0, 00:14:51.593 "r_mbytes_per_sec": 0, 00:14:51.593 "w_mbytes_per_sec": 0 00:14:51.593 }, 00:14:51.593 "claimed": false, 00:14:51.593 "zoned": false, 00:14:51.593 "supported_io_types": { 00:14:51.593 "read": true, 00:14:51.593 "write": true, 00:14:51.593 "unmap": true, 00:14:51.593 "flush": true, 00:14:51.593 "reset": true, 00:14:51.593 "nvme_admin": false, 00:14:51.593 "nvme_io": false, 00:14:51.593 "nvme_io_md": false, 00:14:51.593 "write_zeroes": true, 00:14:51.593 "zcopy": false, 00:14:51.593 "get_zone_info": false, 00:14:51.593 "zone_management": false, 00:14:51.593 "zone_append": false, 00:14:51.593 "compare": false, 00:14:51.593 "compare_and_write": false, 00:14:51.593 "abort": false, 00:14:51.593 "seek_hole": false, 00:14:51.593 "seek_data": false, 00:14:51.593 "copy": false, 00:14:51.593 "nvme_iov_md": false 00:14:51.593 }, 00:14:51.593 "memory_domains": [ 
00:14:51.593 { 00:14:51.593 "dma_device_id": "system", 00:14:51.593 "dma_device_type": 1 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.593 "dma_device_type": 2 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "dma_device_id": "system", 00:14:51.593 "dma_device_type": 1 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.593 "dma_device_type": 2 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "dma_device_id": "system", 00:14:51.593 "dma_device_type": 1 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.593 "dma_device_type": 2 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "dma_device_id": "system", 00:14:51.593 "dma_device_type": 1 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.593 "dma_device_type": 2 00:14:51.593 } 00:14:51.593 ], 00:14:51.593 "driver_specific": { 00:14:51.593 "raid": { 00:14:51.593 "uuid": "e4c2eaac-5a61-44b2-af8f-921939277392", 00:14:51.593 "strip_size_kb": 64, 00:14:51.593 "state": "online", 00:14:51.593 "raid_level": "raid0", 00:14:51.593 "superblock": false, 00:14:51.593 "num_base_bdevs": 4, 00:14:51.593 "num_base_bdevs_discovered": 4, 00:14:51.593 "num_base_bdevs_operational": 4, 00:14:51.593 "base_bdevs_list": [ 00:14:51.593 { 00:14:51.593 "name": "NewBaseBdev", 00:14:51.593 "uuid": "96e39cdc-5c60-4dfa-afe5-1ec125941de9", 00:14:51.593 "is_configured": true, 00:14:51.593 "data_offset": 0, 00:14:51.593 "data_size": 65536 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "name": "BaseBdev2", 00:14:51.593 "uuid": "6274db37-889b-4e1f-8135-dd55371313ee", 00:14:51.593 "is_configured": true, 00:14:51.593 "data_offset": 0, 00:14:51.593 "data_size": 65536 00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "name": "BaseBdev3", 00:14:51.593 "uuid": "bd80cc21-c5ef-4627-85fb-a594a1015f37", 00:14:51.593 "is_configured": true, 00:14:51.593 "data_offset": 0, 00:14:51.593 "data_size": 65536 
00:14:51.593 }, 00:14:51.593 { 00:14:51.593 "name": "BaseBdev4", 00:14:51.593 "uuid": "36e3b5a5-ed75-4273-acc1-d8bb328780ed", 00:14:51.593 "is_configured": true, 00:14:51.593 "data_offset": 0, 00:14:51.593 "data_size": 65536 00:14:51.593 } 00:14:51.593 ] 00:14:51.593 } 00:14:51.593 } 00:14:51.593 }' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:51.593 BaseBdev2 00:14:51.593 BaseBdev3 00:14:51.593 BaseBdev4' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.593 
10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.593 10:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.593 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.593 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.593 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.593 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:51.593 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.593 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.593 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.593 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.853 [2024-10-30 10:43:13.116738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.853 [2024-10-30 10:43:13.116776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.853 [2024-10-30 10:43:13.116870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.853 [2024-10-30 10:43:13.117011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.853 [2024-10-30 10:43:13.117034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69634 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 69634 ']' 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69634 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69634 00:14:51.853 killing process with pid 69634 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69634' 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69634 00:14:51.853 [2024-10-30 10:43:13.153054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.853 10:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69634 00:14:52.112 [2024-10-30 10:43:13.527582] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:53.488 00:14:53.488 real 0m12.691s 00:14:53.488 user 0m21.050s 00:14:53.488 sys 0m1.731s 00:14:53.488 ************************************ 00:14:53.488 END TEST raid_state_function_test 00:14:53.488 ************************************ 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.488 10:43:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:14:53.488 10:43:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:53.488 10:43:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:53.488 10:43:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.488 ************************************ 00:14:53.488 START TEST raid_state_function_test_sb 00:14:53.488 ************************************ 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:53.488 
10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:53.488 Process raid pid: 70320 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70320 00:14:53.488 10:43:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70320' 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70320 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70320 ']' 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.488 10:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.488 [2024-10-30 10:43:14.741936] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:14:53.488 [2024-10-30 10:43:14.742120] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.488 [2024-10-30 10:43:14.929406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.751 [2024-10-30 10:43:15.060624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.010 [2024-10-30 10:43:15.269274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.010 [2024-10-30 10:43:15.269352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.269 [2024-10-30 10:43:15.692455] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.269 [2024-10-30 10:43:15.692525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.269 [2024-10-30 10:43:15.692549] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.269 [2024-10-30 10:43:15.692584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.269 [2024-10-30 10:43:15.692599] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:14:54.269 [2024-10-30 10:43:15.692621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.269 [2024-10-30 10:43:15.692636] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:54.269 [2024-10-30 10:43:15.692657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.269 10:43:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.269 10:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.528 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.528 "name": "Existed_Raid", 00:14:54.528 "uuid": "0d33363f-e0b4-45d2-9c10-27b70a316418", 00:14:54.528 "strip_size_kb": 64, 00:14:54.528 "state": "configuring", 00:14:54.528 "raid_level": "raid0", 00:14:54.528 "superblock": true, 00:14:54.528 "num_base_bdevs": 4, 00:14:54.528 "num_base_bdevs_discovered": 0, 00:14:54.528 "num_base_bdevs_operational": 4, 00:14:54.528 "base_bdevs_list": [ 00:14:54.528 { 00:14:54.528 "name": "BaseBdev1", 00:14:54.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.528 "is_configured": false, 00:14:54.528 "data_offset": 0, 00:14:54.528 "data_size": 0 00:14:54.528 }, 00:14:54.528 { 00:14:54.528 "name": "BaseBdev2", 00:14:54.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.528 "is_configured": false, 00:14:54.528 "data_offset": 0, 00:14:54.528 "data_size": 0 00:14:54.528 }, 00:14:54.528 { 00:14:54.528 "name": "BaseBdev3", 00:14:54.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.528 "is_configured": false, 00:14:54.528 "data_offset": 0, 00:14:54.528 "data_size": 0 00:14:54.528 }, 00:14:54.528 { 00:14:54.528 "name": "BaseBdev4", 00:14:54.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.528 "is_configured": false, 00:14:54.528 "data_offset": 0, 00:14:54.528 "data_size": 0 00:14:54.528 } 00:14:54.528 ] 00:14:54.528 }' 00:14:54.528 10:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.528 10:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 10:43:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 [2024-10-30 10:43:16.192501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.788 [2024-10-30 10:43:16.192721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 [2024-10-30 10:43:16.200498] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.788 [2024-10-30 10:43:16.200557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.788 [2024-10-30 10:43:16.200580] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.788 [2024-10-30 10:43:16.200604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.788 [2024-10-30 10:43:16.200620] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:54.788 [2024-10-30 10:43:16.200641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.788 [2024-10-30 10:43:16.200657] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:54.788 [2024-10-30 10:43:16.200680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.788 [2024-10-30 10:43:16.246041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.788 BaseBdev1 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.788 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.047 10:43:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.047 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.047 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.047 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.047 [ 00:14:55.047 { 00:14:55.047 "name": "BaseBdev1", 00:14:55.047 "aliases": [ 00:14:55.047 "860136cb-05e5-4ebf-a79e-4bfbbf966ffe" 00:14:55.047 ], 00:14:55.047 "product_name": "Malloc disk", 00:14:55.047 "block_size": 512, 00:14:55.047 "num_blocks": 65536, 00:14:55.047 "uuid": "860136cb-05e5-4ebf-a79e-4bfbbf966ffe", 00:14:55.047 "assigned_rate_limits": { 00:14:55.047 "rw_ios_per_sec": 0, 00:14:55.047 "rw_mbytes_per_sec": 0, 00:14:55.047 "r_mbytes_per_sec": 0, 00:14:55.047 "w_mbytes_per_sec": 0 00:14:55.047 }, 00:14:55.047 "claimed": true, 00:14:55.047 "claim_type": "exclusive_write", 00:14:55.047 "zoned": false, 00:14:55.047 "supported_io_types": { 00:14:55.047 "read": true, 00:14:55.047 "write": true, 00:14:55.047 "unmap": true, 00:14:55.047 "flush": true, 00:14:55.047 "reset": true, 00:14:55.048 "nvme_admin": false, 00:14:55.048 "nvme_io": false, 00:14:55.048 "nvme_io_md": false, 00:14:55.048 "write_zeroes": true, 00:14:55.048 "zcopy": true, 00:14:55.048 "get_zone_info": false, 00:14:55.048 "zone_management": false, 00:14:55.048 "zone_append": false, 00:14:55.048 "compare": false, 00:14:55.048 "compare_and_write": false, 00:14:55.048 "abort": true, 00:14:55.048 "seek_hole": false, 00:14:55.048 "seek_data": false, 00:14:55.048 "copy": true, 00:14:55.048 "nvme_iov_md": false 00:14:55.048 }, 00:14:55.048 "memory_domains": [ 00:14:55.048 { 00:14:55.048 "dma_device_id": "system", 00:14:55.048 "dma_device_type": 1 00:14:55.048 }, 00:14:55.048 { 00:14:55.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.048 "dma_device_type": 2 00:14:55.048 } 
00:14:55.048 ], 00:14:55.048 "driver_specific": {} 00:14:55.048 } 00:14:55.048 ] 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.048 10:43:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.048 "name": "Existed_Raid", 00:14:55.048 "uuid": "b537374b-0591-4caa-90a8-38f93823c8c6", 00:14:55.048 "strip_size_kb": 64, 00:14:55.048 "state": "configuring", 00:14:55.048 "raid_level": "raid0", 00:14:55.048 "superblock": true, 00:14:55.048 "num_base_bdevs": 4, 00:14:55.048 "num_base_bdevs_discovered": 1, 00:14:55.048 "num_base_bdevs_operational": 4, 00:14:55.048 "base_bdevs_list": [ 00:14:55.048 { 00:14:55.048 "name": "BaseBdev1", 00:14:55.048 "uuid": "860136cb-05e5-4ebf-a79e-4bfbbf966ffe", 00:14:55.048 "is_configured": true, 00:14:55.048 "data_offset": 2048, 00:14:55.048 "data_size": 63488 00:14:55.048 }, 00:14:55.048 { 00:14:55.048 "name": "BaseBdev2", 00:14:55.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.048 "is_configured": false, 00:14:55.048 "data_offset": 0, 00:14:55.048 "data_size": 0 00:14:55.048 }, 00:14:55.048 { 00:14:55.048 "name": "BaseBdev3", 00:14:55.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.048 "is_configured": false, 00:14:55.048 "data_offset": 0, 00:14:55.048 "data_size": 0 00:14:55.048 }, 00:14:55.048 { 00:14:55.048 "name": "BaseBdev4", 00:14:55.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.048 "is_configured": false, 00:14:55.048 "data_offset": 0, 00:14:55.048 "data_size": 0 00:14:55.048 } 00:14:55.048 ] 00:14:55.048 }' 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.048 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.617 10:43:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.617 [2024-10-30 10:43:16.786346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.617 [2024-10-30 10:43:16.786410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.617 [2024-10-30 10:43:16.798464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.617 [2024-10-30 10:43:16.801160] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.617 [2024-10-30 10:43:16.801396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.617 [2024-10-30 10:43:16.801565] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.617 [2024-10-30 10:43:16.801760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.617 [2024-10-30 10:43:16.801944] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:55.617 [2024-10-30 10:43:16.802021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:55.617 "name": "Existed_Raid", 00:14:55.617 "uuid": "03cc7c12-1e6a-44a9-95ed-5a565d78c008", 00:14:55.617 "strip_size_kb": 64, 00:14:55.617 "state": "configuring", 00:14:55.617 "raid_level": "raid0", 00:14:55.617 "superblock": true, 00:14:55.617 "num_base_bdevs": 4, 00:14:55.617 "num_base_bdevs_discovered": 1, 00:14:55.617 "num_base_bdevs_operational": 4, 00:14:55.617 "base_bdevs_list": [ 00:14:55.617 { 00:14:55.617 "name": "BaseBdev1", 00:14:55.617 "uuid": "860136cb-05e5-4ebf-a79e-4bfbbf966ffe", 00:14:55.617 "is_configured": true, 00:14:55.617 "data_offset": 2048, 00:14:55.617 "data_size": 63488 00:14:55.617 }, 00:14:55.617 { 00:14:55.617 "name": "BaseBdev2", 00:14:55.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.617 "is_configured": false, 00:14:55.617 "data_offset": 0, 00:14:55.617 "data_size": 0 00:14:55.617 }, 00:14:55.617 { 00:14:55.617 "name": "BaseBdev3", 00:14:55.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.617 "is_configured": false, 00:14:55.617 "data_offset": 0, 00:14:55.617 "data_size": 0 00:14:55.617 }, 00:14:55.617 { 00:14:55.617 "name": "BaseBdev4", 00:14:55.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.617 "is_configured": false, 00:14:55.617 "data_offset": 0, 00:14:55.617 "data_size": 0 00:14:55.617 } 00:14:55.617 ] 00:14:55.617 }' 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.617 10:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.876 [2024-10-30 10:43:17.333153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:55.876 BaseBdev2 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.876 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 [ 00:14:56.135 { 00:14:56.135 "name": "BaseBdev2", 00:14:56.135 "aliases": [ 00:14:56.135 "028ff1d4-3c1c-4acd-905e-32f8aa23fa7c" 00:14:56.135 ], 00:14:56.135 "product_name": "Malloc disk", 00:14:56.135 "block_size": 512, 00:14:56.135 "num_blocks": 65536, 00:14:56.135 "uuid": "028ff1d4-3c1c-4acd-905e-32f8aa23fa7c", 
00:14:56.135 "assigned_rate_limits": { 00:14:56.135 "rw_ios_per_sec": 0, 00:14:56.135 "rw_mbytes_per_sec": 0, 00:14:56.135 "r_mbytes_per_sec": 0, 00:14:56.135 "w_mbytes_per_sec": 0 00:14:56.135 }, 00:14:56.135 "claimed": true, 00:14:56.135 "claim_type": "exclusive_write", 00:14:56.135 "zoned": false, 00:14:56.135 "supported_io_types": { 00:14:56.135 "read": true, 00:14:56.135 "write": true, 00:14:56.135 "unmap": true, 00:14:56.135 "flush": true, 00:14:56.135 "reset": true, 00:14:56.135 "nvme_admin": false, 00:14:56.135 "nvme_io": false, 00:14:56.135 "nvme_io_md": false, 00:14:56.135 "write_zeroes": true, 00:14:56.135 "zcopy": true, 00:14:56.135 "get_zone_info": false, 00:14:56.135 "zone_management": false, 00:14:56.135 "zone_append": false, 00:14:56.135 "compare": false, 00:14:56.135 "compare_and_write": false, 00:14:56.135 "abort": true, 00:14:56.135 "seek_hole": false, 00:14:56.135 "seek_data": false, 00:14:56.135 "copy": true, 00:14:56.135 "nvme_iov_md": false 00:14:56.135 }, 00:14:56.135 "memory_domains": [ 00:14:56.135 { 00:14:56.135 "dma_device_id": "system", 00:14:56.135 "dma_device_type": 1 00:14:56.135 }, 00:14:56.135 { 00:14:56.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.135 "dma_device_type": 2 00:14:56.135 } 00:14:56.135 ], 00:14:56.135 "driver_specific": {} 00:14:56.135 } 00:14:56.135 ] 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.135 "name": "Existed_Raid", 00:14:56.135 "uuid": "03cc7c12-1e6a-44a9-95ed-5a565d78c008", 00:14:56.135 "strip_size_kb": 64, 00:14:56.135 "state": "configuring", 00:14:56.135 "raid_level": "raid0", 00:14:56.135 "superblock": true, 00:14:56.135 "num_base_bdevs": 4, 00:14:56.135 "num_base_bdevs_discovered": 2, 00:14:56.135 
"num_base_bdevs_operational": 4, 00:14:56.135 "base_bdevs_list": [ 00:14:56.135 { 00:14:56.135 "name": "BaseBdev1", 00:14:56.135 "uuid": "860136cb-05e5-4ebf-a79e-4bfbbf966ffe", 00:14:56.135 "is_configured": true, 00:14:56.135 "data_offset": 2048, 00:14:56.135 "data_size": 63488 00:14:56.135 }, 00:14:56.135 { 00:14:56.135 "name": "BaseBdev2", 00:14:56.135 "uuid": "028ff1d4-3c1c-4acd-905e-32f8aa23fa7c", 00:14:56.135 "is_configured": true, 00:14:56.135 "data_offset": 2048, 00:14:56.135 "data_size": 63488 00:14:56.135 }, 00:14:56.135 { 00:14:56.135 "name": "BaseBdev3", 00:14:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.135 "is_configured": false, 00:14:56.135 "data_offset": 0, 00:14:56.135 "data_size": 0 00:14:56.135 }, 00:14:56.135 { 00:14:56.135 "name": "BaseBdev4", 00:14:56.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.135 "is_configured": false, 00:14:56.135 "data_offset": 0, 00:14:56.135 "data_size": 0 00:14:56.135 } 00:14:56.135 ] 00:14:56.135 }' 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.135 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.704 [2024-10-30 10:43:17.948507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.704 BaseBdev3 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.704 [ 00:14:56.704 { 00:14:56.704 "name": "BaseBdev3", 00:14:56.704 "aliases": [ 00:14:56.704 "1603ebab-41cc-403f-bd31-d7a94e34214e" 00:14:56.704 ], 00:14:56.704 "product_name": "Malloc disk", 00:14:56.704 "block_size": 512, 00:14:56.704 "num_blocks": 65536, 00:14:56.704 "uuid": "1603ebab-41cc-403f-bd31-d7a94e34214e", 00:14:56.704 "assigned_rate_limits": { 00:14:56.704 "rw_ios_per_sec": 0, 00:14:56.704 "rw_mbytes_per_sec": 0, 00:14:56.704 "r_mbytes_per_sec": 0, 00:14:56.704 "w_mbytes_per_sec": 0 00:14:56.704 }, 00:14:56.704 "claimed": true, 00:14:56.704 "claim_type": "exclusive_write", 00:14:56.704 "zoned": false, 00:14:56.704 "supported_io_types": { 
00:14:56.704 "read": true, 00:14:56.704 "write": true, 00:14:56.704 "unmap": true, 00:14:56.704 "flush": true, 00:14:56.704 "reset": true, 00:14:56.704 "nvme_admin": false, 00:14:56.704 "nvme_io": false, 00:14:56.704 "nvme_io_md": false, 00:14:56.704 "write_zeroes": true, 00:14:56.704 "zcopy": true, 00:14:56.704 "get_zone_info": false, 00:14:56.704 "zone_management": false, 00:14:56.704 "zone_append": false, 00:14:56.704 "compare": false, 00:14:56.704 "compare_and_write": false, 00:14:56.704 "abort": true, 00:14:56.704 "seek_hole": false, 00:14:56.704 "seek_data": false, 00:14:56.704 "copy": true, 00:14:56.704 "nvme_iov_md": false 00:14:56.704 }, 00:14:56.704 "memory_domains": [ 00:14:56.704 { 00:14:56.704 "dma_device_id": "system", 00:14:56.704 "dma_device_type": 1 00:14:56.704 }, 00:14:56.704 { 00:14:56.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.704 "dma_device_type": 2 00:14:56.704 } 00:14:56.704 ], 00:14:56.704 "driver_specific": {} 00:14:56.704 } 00:14:56.704 ] 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.704 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.705 10:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.705 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.705 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.705 "name": "Existed_Raid", 00:14:56.705 "uuid": "03cc7c12-1e6a-44a9-95ed-5a565d78c008", 00:14:56.705 "strip_size_kb": 64, 00:14:56.705 "state": "configuring", 00:14:56.705 "raid_level": "raid0", 00:14:56.705 "superblock": true, 00:14:56.705 "num_base_bdevs": 4, 00:14:56.705 "num_base_bdevs_discovered": 3, 00:14:56.705 "num_base_bdevs_operational": 4, 00:14:56.705 "base_bdevs_list": [ 00:14:56.705 { 00:14:56.705 "name": "BaseBdev1", 00:14:56.705 "uuid": "860136cb-05e5-4ebf-a79e-4bfbbf966ffe", 00:14:56.705 "is_configured": true, 00:14:56.705 "data_offset": 2048, 00:14:56.705 "data_size": 63488 00:14:56.705 }, 00:14:56.705 { 00:14:56.705 "name": "BaseBdev2", 00:14:56.705 
"uuid": "028ff1d4-3c1c-4acd-905e-32f8aa23fa7c", 00:14:56.705 "is_configured": true, 00:14:56.705 "data_offset": 2048, 00:14:56.705 "data_size": 63488 00:14:56.705 }, 00:14:56.705 { 00:14:56.705 "name": "BaseBdev3", 00:14:56.705 "uuid": "1603ebab-41cc-403f-bd31-d7a94e34214e", 00:14:56.705 "is_configured": true, 00:14:56.705 "data_offset": 2048, 00:14:56.705 "data_size": 63488 00:14:56.705 }, 00:14:56.705 { 00:14:56.705 "name": "BaseBdev4", 00:14:56.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.705 "is_configured": false, 00:14:56.705 "data_offset": 0, 00:14:56.705 "data_size": 0 00:14:56.705 } 00:14:56.705 ] 00:14:56.705 }' 00:14:56.705 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.705 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.273 [2024-10-30 10:43:18.507880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:57.273 [2024-10-30 10:43:18.508420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:57.273 [2024-10-30 10:43:18.508448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:57.273 BaseBdev4 00:14:57.273 [2024-10-30 10:43:18.508782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:57.273 [2024-10-30 10:43:18.509001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:57.273 [2024-10-30 10:43:18.509031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.273 [2024-10-30 10:43:18.509212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.273 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.273 [ 00:14:57.273 { 00:14:57.273 "name": "BaseBdev4", 00:14:57.273 "aliases": [ 00:14:57.273 "4759d4f6-3264-4745-8ae7-a6df00640686" 00:14:57.273 ], 00:14:57.273 "product_name": "Malloc disk", 00:14:57.273 "block_size": 512, 00:14:57.273 
"num_blocks": 65536, 00:14:57.273 "uuid": "4759d4f6-3264-4745-8ae7-a6df00640686", 00:14:57.273 "assigned_rate_limits": { 00:14:57.273 "rw_ios_per_sec": 0, 00:14:57.273 "rw_mbytes_per_sec": 0, 00:14:57.273 "r_mbytes_per_sec": 0, 00:14:57.273 "w_mbytes_per_sec": 0 00:14:57.273 }, 00:14:57.273 "claimed": true, 00:14:57.273 "claim_type": "exclusive_write", 00:14:57.273 "zoned": false, 00:14:57.273 "supported_io_types": { 00:14:57.273 "read": true, 00:14:57.273 "write": true, 00:14:57.273 "unmap": true, 00:14:57.273 "flush": true, 00:14:57.273 "reset": true, 00:14:57.273 "nvme_admin": false, 00:14:57.273 "nvme_io": false, 00:14:57.273 "nvme_io_md": false, 00:14:57.273 "write_zeroes": true, 00:14:57.273 "zcopy": true, 00:14:57.273 "get_zone_info": false, 00:14:57.273 "zone_management": false, 00:14:57.274 "zone_append": false, 00:14:57.274 "compare": false, 00:14:57.274 "compare_and_write": false, 00:14:57.274 "abort": true, 00:14:57.274 "seek_hole": false, 00:14:57.274 "seek_data": false, 00:14:57.274 "copy": true, 00:14:57.274 "nvme_iov_md": false 00:14:57.274 }, 00:14:57.274 "memory_domains": [ 00:14:57.274 { 00:14:57.274 "dma_device_id": "system", 00:14:57.274 "dma_device_type": 1 00:14:57.274 }, 00:14:57.274 { 00:14:57.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.274 "dma_device_type": 2 00:14:57.274 } 00:14:57.274 ], 00:14:57.274 "driver_specific": {} 00:14:57.274 } 00:14:57.274 ] 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.274 "name": "Existed_Raid", 00:14:57.274 "uuid": "03cc7c12-1e6a-44a9-95ed-5a565d78c008", 00:14:57.274 "strip_size_kb": 64, 00:14:57.274 "state": "online", 00:14:57.274 "raid_level": "raid0", 00:14:57.274 "superblock": true, 00:14:57.274 "num_base_bdevs": 4, 
00:14:57.274 "num_base_bdevs_discovered": 4, 00:14:57.274 "num_base_bdevs_operational": 4, 00:14:57.274 "base_bdevs_list": [ 00:14:57.274 { 00:14:57.274 "name": "BaseBdev1", 00:14:57.274 "uuid": "860136cb-05e5-4ebf-a79e-4bfbbf966ffe", 00:14:57.274 "is_configured": true, 00:14:57.274 "data_offset": 2048, 00:14:57.274 "data_size": 63488 00:14:57.274 }, 00:14:57.274 { 00:14:57.274 "name": "BaseBdev2", 00:14:57.274 "uuid": "028ff1d4-3c1c-4acd-905e-32f8aa23fa7c", 00:14:57.274 "is_configured": true, 00:14:57.274 "data_offset": 2048, 00:14:57.274 "data_size": 63488 00:14:57.274 }, 00:14:57.274 { 00:14:57.274 "name": "BaseBdev3", 00:14:57.274 "uuid": "1603ebab-41cc-403f-bd31-d7a94e34214e", 00:14:57.274 "is_configured": true, 00:14:57.274 "data_offset": 2048, 00:14:57.274 "data_size": 63488 00:14:57.274 }, 00:14:57.274 { 00:14:57.274 "name": "BaseBdev4", 00:14:57.274 "uuid": "4759d4f6-3264-4745-8ae7-a6df00640686", 00:14:57.274 "is_configured": true, 00:14:57.274 "data_offset": 2048, 00:14:57.274 "data_size": 63488 00:14:57.274 } 00:14:57.274 ] 00:14:57.274 }' 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.274 10:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.843 
10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.843 [2024-10-30 10:43:19.060549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.843 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.843 "name": "Existed_Raid", 00:14:57.843 "aliases": [ 00:14:57.843 "03cc7c12-1e6a-44a9-95ed-5a565d78c008" 00:14:57.843 ], 00:14:57.843 "product_name": "Raid Volume", 00:14:57.843 "block_size": 512, 00:14:57.843 "num_blocks": 253952, 00:14:57.843 "uuid": "03cc7c12-1e6a-44a9-95ed-5a565d78c008", 00:14:57.843 "assigned_rate_limits": { 00:14:57.843 "rw_ios_per_sec": 0, 00:14:57.843 "rw_mbytes_per_sec": 0, 00:14:57.843 "r_mbytes_per_sec": 0, 00:14:57.843 "w_mbytes_per_sec": 0 00:14:57.843 }, 00:14:57.843 "claimed": false, 00:14:57.843 "zoned": false, 00:14:57.843 "supported_io_types": { 00:14:57.843 "read": true, 00:14:57.843 "write": true, 00:14:57.843 "unmap": true, 00:14:57.843 "flush": true, 00:14:57.843 "reset": true, 00:14:57.843 "nvme_admin": false, 00:14:57.844 "nvme_io": false, 00:14:57.844 "nvme_io_md": false, 00:14:57.844 "write_zeroes": true, 00:14:57.844 "zcopy": false, 00:14:57.844 "get_zone_info": false, 00:14:57.844 "zone_management": false, 00:14:57.844 "zone_append": false, 00:14:57.844 "compare": false, 00:14:57.844 "compare_and_write": false, 00:14:57.844 "abort": false, 00:14:57.844 "seek_hole": false, 00:14:57.844 "seek_data": false, 00:14:57.844 "copy": false, 00:14:57.844 
"nvme_iov_md": false 00:14:57.844 }, 00:14:57.844 "memory_domains": [ 00:14:57.844 { 00:14:57.844 "dma_device_id": "system", 00:14:57.844 "dma_device_type": 1 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.844 "dma_device_type": 2 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "dma_device_id": "system", 00:14:57.844 "dma_device_type": 1 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.844 "dma_device_type": 2 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "dma_device_id": "system", 00:14:57.844 "dma_device_type": 1 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.844 "dma_device_type": 2 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "dma_device_id": "system", 00:14:57.844 "dma_device_type": 1 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.844 "dma_device_type": 2 00:14:57.844 } 00:14:57.844 ], 00:14:57.844 "driver_specific": { 00:14:57.844 "raid": { 00:14:57.844 "uuid": "03cc7c12-1e6a-44a9-95ed-5a565d78c008", 00:14:57.844 "strip_size_kb": 64, 00:14:57.844 "state": "online", 00:14:57.844 "raid_level": "raid0", 00:14:57.844 "superblock": true, 00:14:57.844 "num_base_bdevs": 4, 00:14:57.844 "num_base_bdevs_discovered": 4, 00:14:57.844 "num_base_bdevs_operational": 4, 00:14:57.844 "base_bdevs_list": [ 00:14:57.844 { 00:14:57.844 "name": "BaseBdev1", 00:14:57.844 "uuid": "860136cb-05e5-4ebf-a79e-4bfbbf966ffe", 00:14:57.844 "is_configured": true, 00:14:57.844 "data_offset": 2048, 00:14:57.844 "data_size": 63488 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "name": "BaseBdev2", 00:14:57.844 "uuid": "028ff1d4-3c1c-4acd-905e-32f8aa23fa7c", 00:14:57.844 "is_configured": true, 00:14:57.844 "data_offset": 2048, 00:14:57.844 "data_size": 63488 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "name": "BaseBdev3", 00:14:57.844 "uuid": "1603ebab-41cc-403f-bd31-d7a94e34214e", 00:14:57.844 "is_configured": true, 
00:14:57.844 "data_offset": 2048, 00:14:57.844 "data_size": 63488 00:14:57.844 }, 00:14:57.844 { 00:14:57.844 "name": "BaseBdev4", 00:14:57.844 "uuid": "4759d4f6-3264-4745-8ae7-a6df00640686", 00:14:57.844 "is_configured": true, 00:14:57.844 "data_offset": 2048, 00:14:57.844 "data_size": 63488 00:14:57.844 } 00:14:57.844 ] 00:14:57.844 } 00:14:57.844 } 00:14:57.844 }' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:57.844 BaseBdev2 00:14:57.844 BaseBdev3 00:14:57.844 BaseBdev4' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.844 10:43:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.844 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.104 [2024-10-30 10:43:19.412335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.104 [2024-10-30 10:43:19.412407] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.104 [2024-10-30 10:43:19.412471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.104 "name": "Existed_Raid", 00:14:58.104 "uuid": "03cc7c12-1e6a-44a9-95ed-5a565d78c008", 00:14:58.104 "strip_size_kb": 64, 00:14:58.104 "state": "offline", 00:14:58.104 "raid_level": "raid0", 00:14:58.104 "superblock": true, 00:14:58.104 "num_base_bdevs": 4, 00:14:58.104 "num_base_bdevs_discovered": 3, 00:14:58.104 "num_base_bdevs_operational": 3, 00:14:58.104 "base_bdevs_list": [ 00:14:58.104 { 00:14:58.104 "name": null, 00:14:58.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.104 "is_configured": false, 00:14:58.104 "data_offset": 0, 00:14:58.104 "data_size": 63488 00:14:58.104 }, 00:14:58.104 { 00:14:58.104 "name": "BaseBdev2", 00:14:58.104 "uuid": "028ff1d4-3c1c-4acd-905e-32f8aa23fa7c", 00:14:58.104 "is_configured": true, 00:14:58.104 "data_offset": 2048, 00:14:58.104 "data_size": 63488 00:14:58.104 }, 00:14:58.104 { 00:14:58.104 "name": "BaseBdev3", 00:14:58.104 "uuid": "1603ebab-41cc-403f-bd31-d7a94e34214e", 00:14:58.104 "is_configured": true, 00:14:58.104 "data_offset": 2048, 00:14:58.104 "data_size": 63488 00:14:58.104 }, 00:14:58.104 { 00:14:58.104 "name": "BaseBdev4", 00:14:58.104 "uuid": "4759d4f6-3264-4745-8ae7-a6df00640686", 00:14:58.104 "is_configured": true, 00:14:58.104 "data_offset": 2048, 00:14:58.104 "data_size": 63488 00:14:58.104 } 00:14:58.104 ] 00:14:58.104 }' 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.104 10:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.669 
10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.669 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.669 [2024-10-30 10:43:20.101395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.928 [2024-10-30 10:43:20.243516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:58.928 10:43:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:58.928 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:58.928 [2024-10-30 10:43:20.389645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:14:58.928 [2024-10-30 10:43:20.389708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.186 BaseBdev2
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.186 [
00:14:59.186 {
00:14:59.186 "name": "BaseBdev2",
00:14:59.186 "aliases": [
00:14:59.186 "4ad9ded8-95bf-4e76-a687-3a416a41b610"
00:14:59.186 ],
00:14:59.186 "product_name": "Malloc disk",
00:14:59.186 "block_size": 512,
00:14:59.186 "num_blocks": 65536,
00:14:59.186 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610",
00:14:59.186 "assigned_rate_limits": {
00:14:59.186 "rw_ios_per_sec": 0,
00:14:59.186 "rw_mbytes_per_sec": 0,
00:14:59.186 "r_mbytes_per_sec": 0,
00:14:59.186 "w_mbytes_per_sec": 0
00:14:59.186 },
00:14:59.186 "claimed": false,
00:14:59.186 "zoned": false,
00:14:59.186 "supported_io_types": {
00:14:59.186 "read": true,
00:14:59.186 "write": true,
00:14:59.186 "unmap": true,
00:14:59.186 "flush": true,
00:14:59.186 "reset": true,
00:14:59.186 "nvme_admin": false,
00:14:59.186 "nvme_io": false,
00:14:59.186 "nvme_io_md": false,
00:14:59.186 "write_zeroes": true,
00:14:59.186 "zcopy": true,
00:14:59.186 "get_zone_info": false,
00:14:59.186 "zone_management": false,
00:14:59.186 "zone_append": false,
00:14:59.186 "compare": false,
00:14:59.186 "compare_and_write": false,
00:14:59.186 "abort": true,
00:14:59.186 "seek_hole": false,
00:14:59.186 "seek_data": false,
00:14:59.186 "copy": true,
00:14:59.186 "nvme_iov_md": false
00:14:59.186 },
00:14:59.186 "memory_domains": [
00:14:59.186 {
00:14:59.186 "dma_device_id": "system",
00:14:59.186 "dma_device_type": 1
00:14:59.186 },
00:14:59.186 {
00:14:59.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:59.186 "dma_device_type": 2
00:14:59.186 }
00:14:59.186 ],
00:14:59.186 "driver_specific": {}
00:14:59.186 }
00:14:59.186 ]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.186 BaseBdev3
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.186 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.443 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.443 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:59.443 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.443 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.443 [
00:14:59.443 {
00:14:59.443 "name": "BaseBdev3",
00:14:59.443 "aliases": [
00:14:59.443 "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad"
00:14:59.443 ],
00:14:59.443 "product_name": "Malloc disk",
00:14:59.443 "block_size": 512,
00:14:59.443 "num_blocks": 65536,
00:14:59.443 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad",
00:14:59.443 "assigned_rate_limits": {
00:14:59.443 "rw_ios_per_sec": 0,
00:14:59.443 "rw_mbytes_per_sec": 0,
00:14:59.443 "r_mbytes_per_sec": 0,
00:14:59.443 "w_mbytes_per_sec": 0
00:14:59.443 },
00:14:59.443 "claimed": false,
00:14:59.443 "zoned": false,
00:14:59.443 "supported_io_types": {
00:14:59.443 "read": true,
00:14:59.443 "write": true,
00:14:59.443 "unmap": true,
00:14:59.443 "flush": true,
00:14:59.443 "reset": true,
00:14:59.443 "nvme_admin": false,
00:14:59.443 "nvme_io": false,
00:14:59.443 "nvme_io_md": false,
00:14:59.443 "write_zeroes": true,
00:14:59.443 "zcopy": true,
00:14:59.443 "get_zone_info": false,
00:14:59.443 "zone_management": false,
00:14:59.444 "zone_append": false,
00:14:59.444 "compare": false,
00:14:59.444 "compare_and_write": false,
00:14:59.444 "abort": true,
00:14:59.444 "seek_hole": false,
00:14:59.444 "seek_data": false,
00:14:59.444 "copy": true,
00:14:59.444 "nvme_iov_md": false
00:14:59.444 },
00:14:59.444 "memory_domains": [
00:14:59.444 {
00:14:59.444 "dma_device_id": "system",
00:14:59.444 "dma_device_type": 1
00:14:59.444 },
00:14:59.444 {
00:14:59.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:59.444 "dma_device_type": 2
00:14:59.444 }
00:14:59.444 ],
00:14:59.444 "driver_specific": {}
00:14:59.444 }
00:14:59.444 ]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.444 BaseBdev4
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.444 [
00:14:59.444 {
00:14:59.444 "name": "BaseBdev4",
00:14:59.444 "aliases": [
00:14:59.444 "af3b544d-85b0-4453-bf64-eb418d215861"
00:14:59.444 ],
00:14:59.444 "product_name": "Malloc disk",
00:14:59.444 "block_size": 512,
00:14:59.444 "num_blocks": 65536,
00:14:59.444 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861",
00:14:59.444 "assigned_rate_limits": {
00:14:59.444 "rw_ios_per_sec": 0,
00:14:59.444 "rw_mbytes_per_sec": 0,
00:14:59.444 "r_mbytes_per_sec": 0,
00:14:59.444 "w_mbytes_per_sec": 0
00:14:59.444 },
00:14:59.444 "claimed": false,
00:14:59.444 "zoned": false,
00:14:59.444 "supported_io_types": {
00:14:59.444 "read": true,
00:14:59.444 "write": true,
00:14:59.444 "unmap": true,
00:14:59.444 "flush": true,
00:14:59.444 "reset": true,
00:14:59.444 "nvme_admin": false,
00:14:59.444 "nvme_io": false,
00:14:59.444 "nvme_io_md": false,
00:14:59.444 "write_zeroes": true,
00:14:59.444 "zcopy": true,
00:14:59.444 "get_zone_info": false,
00:14:59.444 "zone_management": false,
00:14:59.444 "zone_append": false,
00:14:59.444 "compare": false,
00:14:59.444 "compare_and_write": false,
00:14:59.444 "abort": true,
00:14:59.444 "seek_hole": false,
00:14:59.444 "seek_data": false,
00:14:59.444 "copy": true,
00:14:59.444 "nvme_iov_md": false
00:14:59.444 },
00:14:59.444 "memory_domains": [
00:14:59.444 {
00:14:59.444 "dma_device_id": "system",
00:14:59.444 "dma_device_type": 1
00:14:59.444 },
00:14:59.444 {
00:14:59.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:59.444 "dma_device_type": 2
00:14:59.444 }
00:14:59.444 ],
00:14:59.444 "driver_specific": {}
00:14:59.444 }
00:14:59.444 ]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.444 [2024-10-30 10:43:20.760455] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:59.444 [2024-10-30 10:43:20.760649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:59.444 [2024-10-30 10:43:20.760790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:59.444 [2024-10-30 10:43:20.763297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:59.444 [2024-10-30 10:43:20.763506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:59.444 "name": "Existed_Raid",
00:14:59.444 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5",
00:14:59.444 "strip_size_kb": 64,
00:14:59.444 "state": "configuring",
00:14:59.444 "raid_level": "raid0",
00:14:59.444 "superblock": true,
00:14:59.444 "num_base_bdevs": 4,
00:14:59.444 "num_base_bdevs_discovered": 3,
00:14:59.444 "num_base_bdevs_operational": 4,
00:14:59.444 "base_bdevs_list": [
00:14:59.444 {
00:14:59.444 "name": "BaseBdev1",
00:14:59.444 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:59.444 "is_configured": false,
00:14:59.444 "data_offset": 0,
00:14:59.444 "data_size": 0
00:14:59.444 },
00:14:59.444 {
00:14:59.444 "name": "BaseBdev2",
00:14:59.444 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610",
00:14:59.444 "is_configured": true,
00:14:59.444 "data_offset": 2048,
00:14:59.444 "data_size": 63488
00:14:59.444 },
00:14:59.444 {
00:14:59.444 "name": "BaseBdev3",
00:14:59.444 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad",
00:14:59.444 "is_configured": true,
00:14:59.444 "data_offset": 2048,
00:14:59.444 "data_size": 63488
00:14:59.444 },
00:14:59.444 {
00:14:59.444 "name": "BaseBdev4",
00:14:59.444 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861",
00:14:59.444 "is_configured": true,
00:14:59.444 "data_offset": 2048,
00:14:59.444 "data_size": 63488
00:14:59.444 }
00:14:59.444 ]
00:14:59.444 }'
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:59.444 10:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.011 [2024-10-30 10:43:21.256636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:00.011 "name": "Existed_Raid",
00:15:00.011 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5",
00:15:00.011 "strip_size_kb": 64,
00:15:00.011 "state": "configuring",
00:15:00.011 "raid_level": "raid0",
00:15:00.011 "superblock": true,
00:15:00.011 "num_base_bdevs": 4,
00:15:00.011 "num_base_bdevs_discovered": 2,
00:15:00.011 "num_base_bdevs_operational": 4,
00:15:00.011 "base_bdevs_list": [
00:15:00.011 {
00:15:00.011 "name": "BaseBdev1",
00:15:00.011 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:00.011 "is_configured": false,
00:15:00.011 "data_offset": 0,
00:15:00.011 "data_size": 0
00:15:00.011 },
00:15:00.011 {
00:15:00.011 "name": null,
00:15:00.011 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610",
00:15:00.011 "is_configured": false,
00:15:00.011 "data_offset": 0,
00:15:00.011 "data_size": 63488
00:15:00.011 },
00:15:00.011 {
00:15:00.011 "name": "BaseBdev3",
00:15:00.011 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad",
00:15:00.011 "is_configured": true,
00:15:00.011 "data_offset": 2048,
00:15:00.011 "data_size": 63488
00:15:00.011 },
00:15:00.011 {
00:15:00.011 "name": "BaseBdev4",
00:15:00.011 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861",
00:15:00.011 "is_configured": true,
00:15:00.011 "data_offset": 2048,
00:15:00.011 "data_size": 63488
00:15:00.011 }
00:15:00.011 ]
00:15:00.011 }'
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:00.011 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.578 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.578 [2024-10-30 10:43:21.899206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:00.579 BaseBdev1
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.579 [
00:15:00.579 {
00:15:00.579 "name": "BaseBdev1",
00:15:00.579 "aliases": [
00:15:00.579 "f3624585-36ca-406a-acf6-5907d3455eba"
00:15:00.579 ],
00:15:00.579 "product_name": "Malloc disk",
00:15:00.579 "block_size": 512,
00:15:00.579 "num_blocks": 65536,
00:15:00.579 "uuid": "f3624585-36ca-406a-acf6-5907d3455eba",
00:15:00.579 "assigned_rate_limits": {
00:15:00.579 "rw_ios_per_sec": 0,
00:15:00.579 "rw_mbytes_per_sec": 0,
00:15:00.579 "r_mbytes_per_sec": 0,
00:15:00.579 "w_mbytes_per_sec": 0
00:15:00.579 },
00:15:00.579 "claimed": true,
00:15:00.579 "claim_type": "exclusive_write",
00:15:00.579 "zoned": false,
00:15:00.579 "supported_io_types": {
00:15:00.579 "read": true,
00:15:00.579 "write": true,
00:15:00.579 "unmap": true,
00:15:00.579 "flush": true,
00:15:00.579 "reset": true,
00:15:00.579 "nvme_admin": false,
00:15:00.579 "nvme_io": false,
00:15:00.579 "nvme_io_md": false,
00:15:00.579 "write_zeroes": true,
00:15:00.579 "zcopy": true,
00:15:00.579 "get_zone_info": false,
00:15:00.579 "zone_management": false,
00:15:00.579 "zone_append": false,
00:15:00.579 "compare": false,
00:15:00.579 "compare_and_write": false,
00:15:00.579 "abort": true,
00:15:00.579 "seek_hole": false,
00:15:00.579 "seek_data": false,
00:15:00.579 "copy": true,
00:15:00.579 "nvme_iov_md": false
00:15:00.579 },
00:15:00.579 "memory_domains": [
00:15:00.579 {
00:15:00.579 "dma_device_id": "system",
00:15:00.579 "dma_device_type": 1
00:15:00.579 },
00:15:00.579 {
00:15:00.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:00.579 "dma_device_type": 2
00:15:00.579 }
00:15:00.579 ],
00:15:00.579 "driver_specific": {}
00:15:00.579 }
00:15:00.579 ]
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:00.579 "name": "Existed_Raid",
00:15:00.579 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5",
00:15:00.579 "strip_size_kb": 64,
00:15:00.579 "state": "configuring",
00:15:00.579 "raid_level": "raid0",
00:15:00.579 "superblock": true,
00:15:00.579 "num_base_bdevs": 4,
00:15:00.579 "num_base_bdevs_discovered": 3,
00:15:00.579 "num_base_bdevs_operational": 4,
00:15:00.579 "base_bdevs_list": [
00:15:00.579 {
00:15:00.579 "name": "BaseBdev1",
00:15:00.579 "uuid": "f3624585-36ca-406a-acf6-5907d3455eba",
00:15:00.579 "is_configured": true,
00:15:00.579 "data_offset": 2048,
00:15:00.579 "data_size": 63488
00:15:00.579 },
00:15:00.579 {
00:15:00.579 "name": null,
00:15:00.579 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610",
00:15:00.579 "is_configured": false,
00:15:00.579 "data_offset": 0,
00:15:00.579 "data_size": 63488
00:15:00.579 },
00:15:00.579 {
00:15:00.579 "name": "BaseBdev3",
00:15:00.579 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad",
00:15:00.579 "is_configured": true,
00:15:00.579 "data_offset": 2048,
00:15:00.579 "data_size": 63488
00:15:00.579 },
00:15:00.579 {
00:15:00.579 "name": "BaseBdev4",
00:15:00.579 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861",
00:15:00.579 "is_configured": true,
00:15:00.579 "data_offset": 2048,
00:15:00.579 "data_size": 63488
00:15:00.579 }
00:15:00.579 ]
00:15:00.579 }'
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:00.579 10:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.146 [2024-10-30 10:43:22.527535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.146 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:01.146 "name": "Existed_Raid",
00:15:01.146 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5",
00:15:01.146 "strip_size_kb": 64,
00:15:01.147 "state": "configuring",
00:15:01.147 "raid_level": "raid0",
00:15:01.147 "superblock": true,
00:15:01.147 "num_base_bdevs": 4,
00:15:01.147 "num_base_bdevs_discovered": 2,
00:15:01.147 "num_base_bdevs_operational": 4,
00:15:01.147 "base_bdevs_list": [
00:15:01.147 {
00:15:01.147 "name": "BaseBdev1",
00:15:01.147 "uuid": "f3624585-36ca-406a-acf6-5907d3455eba",
00:15:01.147 "is_configured": true,
00:15:01.147 "data_offset": 2048,
00:15:01.147 "data_size": 63488
00:15:01.147 },
00:15:01.147 {
00:15:01.147 "name": null,
00:15:01.147 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610",
00:15:01.147 "is_configured": false,
00:15:01.147 "data_offset": 0,
00:15:01.147 "data_size": 63488
00:15:01.147 },
00:15:01.147 {
00:15:01.147 "name": null,
00:15:01.147 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad",
00:15:01.147 "is_configured": false,
00:15:01.147 "data_offset": 0,
00:15:01.147 "data_size": 63488
00:15:01.147 },
00:15:01.147 {
00:15:01.147 "name": "BaseBdev4",
00:15:01.147 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861",
00:15:01.147 "is_configured": true,
00:15:01.147 "data_offset": 2048,
00:15:01.147 "data_size": 63488
00:15:01.147 }
00:15:01.147 ]
00:15:01.147 }'
00:15:01.147 10:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:01.147 10:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.714 [2024-10-30 10:43:23.091748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:01.714 "name": "Existed_Raid",
00:15:01.714 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5",
00:15:01.714 "strip_size_kb": 64,
00:15:01.714 "state": "configuring",
00:15:01.714 "raid_level": "raid0",
00:15:01.714 "superblock": true,
00:15:01.714 "num_base_bdevs": 4,
00:15:01.714 "num_base_bdevs_discovered": 3,
00:15:01.714 "num_base_bdevs_operational": 4,
00:15:01.714 "base_bdevs_list": [
00:15:01.714 {
00:15:01.714 "name": "BaseBdev1",
00:15:01.714 "uuid": "f3624585-36ca-406a-acf6-5907d3455eba",
00:15:01.714 "is_configured": true,
00:15:01.714 "data_offset": 2048,
00:15:01.714 "data_size": 63488
00:15:01.714 },
00:15:01.714 {
00:15:01.714 "name": null,
00:15:01.714 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610",
00:15:01.714 "is_configured": false,
00:15:01.714 "data_offset": 0,
00:15:01.714 "data_size": 63488
00:15:01.714 },
00:15:01.714 {
00:15:01.714 "name": "BaseBdev3",
00:15:01.714 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad",
00:15:01.714 "is_configured": true,
00:15:01.714 "data_offset": 2048,
00:15:01.714 "data_size": 63488
00:15:01.714 },
00:15:01.714 {
00:15:01.714 "name": "BaseBdev4",
00:15:01.714 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861",
00:15:01.714 "is_configured": true,
00:15:01.714 "data_offset": 2048,
00:15:01.714 "data_size": 63488
00:15:01.714 }
00:15:01.714 ]
00:15:01.714 }'
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:01.714 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.378 [2024-10-30 10:43:23.675900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.378 "name": "Existed_Raid", 00:15:02.378 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5", 00:15:02.378 "strip_size_kb": 64, 00:15:02.378 "state": "configuring", 00:15:02.378 "raid_level": "raid0", 00:15:02.378 "superblock": true, 00:15:02.378 "num_base_bdevs": 4, 00:15:02.378 "num_base_bdevs_discovered": 2, 00:15:02.378 "num_base_bdevs_operational": 4, 00:15:02.378 "base_bdevs_list": [ 00:15:02.378 { 00:15:02.378 "name": null, 00:15:02.378 
"uuid": "f3624585-36ca-406a-acf6-5907d3455eba", 00:15:02.378 "is_configured": false, 00:15:02.378 "data_offset": 0, 00:15:02.378 "data_size": 63488 00:15:02.378 }, 00:15:02.378 { 00:15:02.378 "name": null, 00:15:02.378 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610", 00:15:02.378 "is_configured": false, 00:15:02.378 "data_offset": 0, 00:15:02.378 "data_size": 63488 00:15:02.378 }, 00:15:02.378 { 00:15:02.378 "name": "BaseBdev3", 00:15:02.378 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad", 00:15:02.378 "is_configured": true, 00:15:02.378 "data_offset": 2048, 00:15:02.378 "data_size": 63488 00:15:02.378 }, 00:15:02.378 { 00:15:02.378 "name": "BaseBdev4", 00:15:02.378 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861", 00:15:02.378 "is_configured": true, 00:15:02.378 "data_offset": 2048, 00:15:02.378 "data_size": 63488 00:15:02.378 } 00:15:02.378 ] 00:15:02.378 }' 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.378 10:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.962 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.962 [2024-10-30 10:43:24.352974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.963 "name": "Existed_Raid", 00:15:02.963 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5", 00:15:02.963 "strip_size_kb": 64, 00:15:02.963 "state": "configuring", 00:15:02.963 "raid_level": "raid0", 00:15:02.963 "superblock": true, 00:15:02.963 "num_base_bdevs": 4, 00:15:02.963 "num_base_bdevs_discovered": 3, 00:15:02.963 "num_base_bdevs_operational": 4, 00:15:02.963 "base_bdevs_list": [ 00:15:02.963 { 00:15:02.963 "name": null, 00:15:02.963 "uuid": "f3624585-36ca-406a-acf6-5907d3455eba", 00:15:02.963 "is_configured": false, 00:15:02.963 "data_offset": 0, 00:15:02.963 "data_size": 63488 00:15:02.963 }, 00:15:02.963 { 00:15:02.963 "name": "BaseBdev2", 00:15:02.963 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610", 00:15:02.963 "is_configured": true, 00:15:02.963 "data_offset": 2048, 00:15:02.963 "data_size": 63488 00:15:02.963 }, 00:15:02.963 { 00:15:02.963 "name": "BaseBdev3", 00:15:02.963 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad", 00:15:02.963 "is_configured": true, 00:15:02.963 "data_offset": 2048, 00:15:02.963 "data_size": 63488 00:15:02.963 }, 00:15:02.963 { 00:15:02.963 "name": "BaseBdev4", 00:15:02.963 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861", 00:15:02.963 "is_configured": true, 00:15:02.963 "data_offset": 2048, 00:15:02.963 "data_size": 63488 00:15:02.963 } 00:15:02.963 ] 00:15:02.963 }' 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.963 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:03.531 10:43:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f3624585-36ca-406a-acf6-5907d3455eba 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.531 10:43:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.790 [2024-10-30 10:43:25.016848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:03.790 [2024-10-30 10:43:25.017401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:03.790 NewBaseBdev 00:15:03.790 [2024-10-30 10:43:25.017599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:03.790 [2024-10-30 10:43:25.017956] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:03.790 [2024-10-30 10:43:25.018161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:03.790 [2024-10-30 10:43:25.018184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:03.790 [2024-10-30 10:43:25.018370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.790 10:43:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.790 [ 00:15:03.790 { 00:15:03.790 "name": "NewBaseBdev", 00:15:03.790 "aliases": [ 00:15:03.790 "f3624585-36ca-406a-acf6-5907d3455eba" 00:15:03.790 ], 00:15:03.790 "product_name": "Malloc disk", 00:15:03.790 "block_size": 512, 00:15:03.790 "num_blocks": 65536, 00:15:03.790 "uuid": "f3624585-36ca-406a-acf6-5907d3455eba", 00:15:03.790 "assigned_rate_limits": { 00:15:03.790 "rw_ios_per_sec": 0, 00:15:03.790 "rw_mbytes_per_sec": 0, 00:15:03.790 "r_mbytes_per_sec": 0, 00:15:03.790 "w_mbytes_per_sec": 0 00:15:03.790 }, 00:15:03.790 "claimed": true, 00:15:03.790 "claim_type": "exclusive_write", 00:15:03.790 "zoned": false, 00:15:03.790 "supported_io_types": { 00:15:03.790 "read": true, 00:15:03.790 "write": true, 00:15:03.790 "unmap": true, 00:15:03.790 "flush": true, 00:15:03.790 "reset": true, 00:15:03.790 "nvme_admin": false, 00:15:03.790 "nvme_io": false, 00:15:03.790 "nvme_io_md": false, 00:15:03.790 "write_zeroes": true, 00:15:03.790 "zcopy": true, 00:15:03.790 "get_zone_info": false, 00:15:03.790 "zone_management": false, 00:15:03.790 "zone_append": false, 00:15:03.790 "compare": false, 00:15:03.790 "compare_and_write": false, 00:15:03.790 "abort": true, 00:15:03.790 "seek_hole": false, 00:15:03.790 "seek_data": false, 00:15:03.790 "copy": true, 00:15:03.790 "nvme_iov_md": false 00:15:03.790 }, 00:15:03.790 "memory_domains": [ 00:15:03.790 { 00:15:03.790 "dma_device_id": "system", 00:15:03.790 "dma_device_type": 1 00:15:03.790 }, 00:15:03.790 { 00:15:03.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.790 "dma_device_type": 2 00:15:03.790 } 00:15:03.790 ], 00:15:03.790 "driver_specific": {} 00:15:03.790 } 00:15:03.790 ] 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:03.790 10:43:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.790 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.790 "name": "Existed_Raid", 00:15:03.790 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5", 00:15:03.790 "strip_size_kb": 64, 00:15:03.790 
"state": "online", 00:15:03.790 "raid_level": "raid0", 00:15:03.790 "superblock": true, 00:15:03.790 "num_base_bdevs": 4, 00:15:03.790 "num_base_bdevs_discovered": 4, 00:15:03.790 "num_base_bdevs_operational": 4, 00:15:03.790 "base_bdevs_list": [ 00:15:03.790 { 00:15:03.790 "name": "NewBaseBdev", 00:15:03.790 "uuid": "f3624585-36ca-406a-acf6-5907d3455eba", 00:15:03.790 "is_configured": true, 00:15:03.790 "data_offset": 2048, 00:15:03.790 "data_size": 63488 00:15:03.790 }, 00:15:03.790 { 00:15:03.790 "name": "BaseBdev2", 00:15:03.790 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610", 00:15:03.790 "is_configured": true, 00:15:03.790 "data_offset": 2048, 00:15:03.790 "data_size": 63488 00:15:03.790 }, 00:15:03.790 { 00:15:03.791 "name": "BaseBdev3", 00:15:03.791 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad", 00:15:03.791 "is_configured": true, 00:15:03.791 "data_offset": 2048, 00:15:03.791 "data_size": 63488 00:15:03.791 }, 00:15:03.791 { 00:15:03.791 "name": "BaseBdev4", 00:15:03.791 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861", 00:15:03.791 "is_configured": true, 00:15:03.791 "data_offset": 2048, 00:15:03.791 "data_size": 63488 00:15:03.791 } 00:15:03.791 ] 00:15:03.791 }' 00:15:03.791 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.791 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.361 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:04.361 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:04.361 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.361 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.361 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.361 
10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.361 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.362 [2024-10-30 10:43:25.541539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.362 "name": "Existed_Raid", 00:15:04.362 "aliases": [ 00:15:04.362 "73df729b-cedb-49ab-848b-c84b5d7948b5" 00:15:04.362 ], 00:15:04.362 "product_name": "Raid Volume", 00:15:04.362 "block_size": 512, 00:15:04.362 "num_blocks": 253952, 00:15:04.362 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5", 00:15:04.362 "assigned_rate_limits": { 00:15:04.362 "rw_ios_per_sec": 0, 00:15:04.362 "rw_mbytes_per_sec": 0, 00:15:04.362 "r_mbytes_per_sec": 0, 00:15:04.362 "w_mbytes_per_sec": 0 00:15:04.362 }, 00:15:04.362 "claimed": false, 00:15:04.362 "zoned": false, 00:15:04.362 "supported_io_types": { 00:15:04.362 "read": true, 00:15:04.362 "write": true, 00:15:04.362 "unmap": true, 00:15:04.362 "flush": true, 00:15:04.362 "reset": true, 00:15:04.362 "nvme_admin": false, 00:15:04.362 "nvme_io": false, 00:15:04.362 "nvme_io_md": false, 00:15:04.362 "write_zeroes": true, 00:15:04.362 "zcopy": false, 00:15:04.362 "get_zone_info": false, 00:15:04.362 "zone_management": false, 00:15:04.362 "zone_append": false, 00:15:04.362 "compare": false, 00:15:04.362 "compare_and_write": false, 00:15:04.362 "abort": 
false, 00:15:04.362 "seek_hole": false, 00:15:04.362 "seek_data": false, 00:15:04.362 "copy": false, 00:15:04.362 "nvme_iov_md": false 00:15:04.362 }, 00:15:04.362 "memory_domains": [ 00:15:04.362 { 00:15:04.362 "dma_device_id": "system", 00:15:04.362 "dma_device_type": 1 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.362 "dma_device_type": 2 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "system", 00:15:04.362 "dma_device_type": 1 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.362 "dma_device_type": 2 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "system", 00:15:04.362 "dma_device_type": 1 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.362 "dma_device_type": 2 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "system", 00:15:04.362 "dma_device_type": 1 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.362 "dma_device_type": 2 00:15:04.362 } 00:15:04.362 ], 00:15:04.362 "driver_specific": { 00:15:04.362 "raid": { 00:15:04.362 "uuid": "73df729b-cedb-49ab-848b-c84b5d7948b5", 00:15:04.362 "strip_size_kb": 64, 00:15:04.362 "state": "online", 00:15:04.362 "raid_level": "raid0", 00:15:04.362 "superblock": true, 00:15:04.362 "num_base_bdevs": 4, 00:15:04.362 "num_base_bdevs_discovered": 4, 00:15:04.362 "num_base_bdevs_operational": 4, 00:15:04.362 "base_bdevs_list": [ 00:15:04.362 { 00:15:04.362 "name": "NewBaseBdev", 00:15:04.362 "uuid": "f3624585-36ca-406a-acf6-5907d3455eba", 00:15:04.362 "is_configured": true, 00:15:04.362 "data_offset": 2048, 00:15:04.362 "data_size": 63488 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "name": "BaseBdev2", 00:15:04.362 "uuid": "4ad9ded8-95bf-4e76-a687-3a416a41b610", 00:15:04.362 "is_configured": true, 00:15:04.362 "data_offset": 2048, 00:15:04.362 "data_size": 63488 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 
"name": "BaseBdev3", 00:15:04.362 "uuid": "5d13cd93-d6e9-4b1f-8bcd-6c6d3d0304ad", 00:15:04.362 "is_configured": true, 00:15:04.362 "data_offset": 2048, 00:15:04.362 "data_size": 63488 00:15:04.362 }, 00:15:04.362 { 00:15:04.362 "name": "BaseBdev4", 00:15:04.362 "uuid": "af3b544d-85b0-4453-bf64-eb418d215861", 00:15:04.362 "is_configured": true, 00:15:04.362 "data_offset": 2048, 00:15:04.362 "data_size": 63488 00:15:04.362 } 00:15:04.362 ] 00:15:04.362 } 00:15:04.362 } 00:15:04.362 }' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:04.362 BaseBdev2 00:15:04.362 BaseBdev3 00:15:04.362 BaseBdev4' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.362 10:43:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.362 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.619 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.619 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:04.619 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.620 [2024-10-30 10:43:25.897168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.620 [2024-10-30 10:43:25.897205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.620 [2024-10-30 10:43:25.897301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.620 [2024-10-30 10:43:25.897417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.620 [2024-10-30 10:43:25.897433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70320 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70320 ']' 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70320 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70320 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70320' 00:15:04.620 killing process with pid 70320 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70320 00:15:04.620 [2024-10-30 10:43:25.937553] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.620 10:43:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70320 00:15:04.878 [2024-10-30 10:43:26.289908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.254 10:43:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:06.254 00:15:06.254 real 0m12.696s 00:15:06.254 user 0m21.031s 00:15:06.254 sys 0m1.775s 00:15:06.254 10:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:06.254 
************************************ 00:15:06.254 END TEST raid_state_function_test_sb 00:15:06.254 ************************************ 00:15:06.254 10:43:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.254 10:43:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:15:06.254 10:43:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:06.254 10:43:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:06.254 10:43:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.254 ************************************ 00:15:06.254 START TEST raid_superblock_test 00:15:06.254 ************************************ 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:06.254 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71007 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71007 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 71007 ']' 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:06.255 10:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.255 [2024-10-30 10:43:27.474099] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:15:06.255 [2024-10-30 10:43:27.474278] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71007 ] 00:15:06.255 [2024-10-30 10:43:27.647667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.513 [2024-10-30 10:43:27.776415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.513 [2024-10-30 10:43:27.980651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.513 [2024-10-30 10:43:27.980704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:07.081 
10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.081 malloc1 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.081 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.081 [2024-10-30 10:43:28.546195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.081 [2024-10-30 10:43:28.546290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.081 [2024-10-30 10:43:28.546326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:07.081 [2024-10-30 10:43:28.546341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.081 [2024-10-30 10:43:28.549273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.081 [2024-10-30 10:43:28.549319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.341 pt1 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.341 malloc2 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.341 [2024-10-30 10:43:28.601954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.341 [2024-10-30 10:43:28.602032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.341 [2024-10-30 10:43:28.602064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:07.341 [2024-10-30 10:43:28.602079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.341 [2024-10-30 10:43:28.604830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.341 [2024-10-30 10:43:28.604877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.341 
pt2 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.341 malloc3 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.341 [2024-10-30 10:43:28.668842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:07.341 [2024-10-30 10:43:28.669069] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.341 [2024-10-30 10:43:28.669127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:07.341 [2024-10-30 10:43:28.669144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.341 [2024-10-30 10:43:28.671868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.341 [2024-10-30 10:43:28.671915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:07.341 pt3 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.341 malloc4 00:15:07.341 10:43:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.342 [2024-10-30 10:43:28.721467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:07.342 [2024-10-30 10:43:28.721652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.342 [2024-10-30 10:43:28.721695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:07.342 [2024-10-30 10:43:28.721711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.342 [2024-10-30 10:43:28.724521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.342 [2024-10-30 10:43:28.724567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:07.342 pt4 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.342 [2024-10-30 10:43:28.733504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.342 [2024-10-30 
10:43:28.735920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.342 [2024-10-30 10:43:28.736167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:07.342 [2024-10-30 10:43:28.736278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:07.342 [2024-10-30 10:43:28.736524] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:07.342 [2024-10-30 10:43:28.736543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:07.342 [2024-10-30 10:43:28.736867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:07.342 [2024-10-30 10:43:28.737115] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:07.342 [2024-10-30 10:43:28.737137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:07.342 [2024-10-30 10:43:28.737316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.342 "name": "raid_bdev1", 00:15:07.342 "uuid": "1d7cb55b-6406-4e69-9191-98f8dd80042f", 00:15:07.342 "strip_size_kb": 64, 00:15:07.342 "state": "online", 00:15:07.342 "raid_level": "raid0", 00:15:07.342 "superblock": true, 00:15:07.342 "num_base_bdevs": 4, 00:15:07.342 "num_base_bdevs_discovered": 4, 00:15:07.342 "num_base_bdevs_operational": 4, 00:15:07.342 "base_bdevs_list": [ 00:15:07.342 { 00:15:07.342 "name": "pt1", 00:15:07.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.342 "is_configured": true, 00:15:07.342 "data_offset": 2048, 00:15:07.342 "data_size": 63488 00:15:07.342 }, 00:15:07.342 { 00:15:07.342 "name": "pt2", 00:15:07.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.342 "is_configured": true, 00:15:07.342 "data_offset": 2048, 00:15:07.342 "data_size": 63488 00:15:07.342 }, 00:15:07.342 { 00:15:07.342 "name": "pt3", 00:15:07.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.342 "is_configured": true, 00:15:07.342 "data_offset": 2048, 00:15:07.342 
"data_size": 63488 00:15:07.342 }, 00:15:07.342 { 00:15:07.342 "name": "pt4", 00:15:07.342 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.342 "is_configured": true, 00:15:07.342 "data_offset": 2048, 00:15:07.342 "data_size": 63488 00:15:07.342 } 00:15:07.342 ] 00:15:07.342 }' 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.342 10:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.985 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:07.985 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:07.985 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.985 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:07.985 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.985 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.986 [2024-10-30 10:43:29.238107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.986 "name": "raid_bdev1", 00:15:07.986 "aliases": [ 00:15:07.986 "1d7cb55b-6406-4e69-9191-98f8dd80042f" 
00:15:07.986 ], 00:15:07.986 "product_name": "Raid Volume", 00:15:07.986 "block_size": 512, 00:15:07.986 "num_blocks": 253952, 00:15:07.986 "uuid": "1d7cb55b-6406-4e69-9191-98f8dd80042f", 00:15:07.986 "assigned_rate_limits": { 00:15:07.986 "rw_ios_per_sec": 0, 00:15:07.986 "rw_mbytes_per_sec": 0, 00:15:07.986 "r_mbytes_per_sec": 0, 00:15:07.986 "w_mbytes_per_sec": 0 00:15:07.986 }, 00:15:07.986 "claimed": false, 00:15:07.986 "zoned": false, 00:15:07.986 "supported_io_types": { 00:15:07.986 "read": true, 00:15:07.986 "write": true, 00:15:07.986 "unmap": true, 00:15:07.986 "flush": true, 00:15:07.986 "reset": true, 00:15:07.986 "nvme_admin": false, 00:15:07.986 "nvme_io": false, 00:15:07.986 "nvme_io_md": false, 00:15:07.986 "write_zeroes": true, 00:15:07.986 "zcopy": false, 00:15:07.986 "get_zone_info": false, 00:15:07.986 "zone_management": false, 00:15:07.986 "zone_append": false, 00:15:07.986 "compare": false, 00:15:07.986 "compare_and_write": false, 00:15:07.986 "abort": false, 00:15:07.986 "seek_hole": false, 00:15:07.986 "seek_data": false, 00:15:07.986 "copy": false, 00:15:07.986 "nvme_iov_md": false 00:15:07.986 }, 00:15:07.986 "memory_domains": [ 00:15:07.986 { 00:15:07.986 "dma_device_id": "system", 00:15:07.986 "dma_device_type": 1 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.986 "dma_device_type": 2 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "dma_device_id": "system", 00:15:07.986 "dma_device_type": 1 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.986 "dma_device_type": 2 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "dma_device_id": "system", 00:15:07.986 "dma_device_type": 1 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.986 "dma_device_type": 2 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "dma_device_id": "system", 00:15:07.986 "dma_device_type": 1 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:07.986 "dma_device_type": 2 00:15:07.986 } 00:15:07.986 ], 00:15:07.986 "driver_specific": { 00:15:07.986 "raid": { 00:15:07.986 "uuid": "1d7cb55b-6406-4e69-9191-98f8dd80042f", 00:15:07.986 "strip_size_kb": 64, 00:15:07.986 "state": "online", 00:15:07.986 "raid_level": "raid0", 00:15:07.986 "superblock": true, 00:15:07.986 "num_base_bdevs": 4, 00:15:07.986 "num_base_bdevs_discovered": 4, 00:15:07.986 "num_base_bdevs_operational": 4, 00:15:07.986 "base_bdevs_list": [ 00:15:07.986 { 00:15:07.986 "name": "pt1", 00:15:07.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.986 "is_configured": true, 00:15:07.986 "data_offset": 2048, 00:15:07.986 "data_size": 63488 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "name": "pt2", 00:15:07.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.986 "is_configured": true, 00:15:07.986 "data_offset": 2048, 00:15:07.986 "data_size": 63488 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "name": "pt3", 00:15:07.986 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.986 "is_configured": true, 00:15:07.986 "data_offset": 2048, 00:15:07.986 "data_size": 63488 00:15:07.986 }, 00:15:07.986 { 00:15:07.986 "name": "pt4", 00:15:07.986 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.986 "is_configured": true, 00:15:07.986 "data_offset": 2048, 00:15:07.986 "data_size": 63488 00:15:07.986 } 00:15:07.986 ] 00:15:07.986 } 00:15:07.986 } 00:15:07.986 }' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:07.986 pt2 00:15:07.986 pt3 00:15:07.986 pt4' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.986 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.987 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.987 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.246 10:43:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.246 [2024-10-30 10:43:29.594172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1d7cb55b-6406-4e69-9191-98f8dd80042f 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1d7cb55b-6406-4e69-9191-98f8dd80042f ']' 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.246 [2024-10-30 10:43:29.641797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.246 [2024-10-30 10:43:29.641861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.246 [2024-10-30 10:43:29.641960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.246 [2024-10-30 10:43:29.642075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.246 [2024-10-30 10:43:29.642103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]'
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.246 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.505 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.505 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:08.505 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:15:08.505 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.505 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.505 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.506 [2024-10-30 10:43:29.785828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:08.506 [2024-10-30 10:43:29.788344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:08.506 [2024-10-30 10:43:29.788441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:08.506 [2024-10-30 10:43:29.788493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:15:08.506 [2024-10-30 10:43:29.788563] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:08.506 [2024-10-30 10:43:29.788632] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:08.506 [2024-10-30 10:43:29.788665] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:15:08.506 [2024-10-30 10:43:29.788695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:15:08.506 [2024-10-30 10:43:29.788716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:08.506 [2024-10-30 10:43:29.788733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:15:08.506 request:
00:15:08.506 {
00:15:08.506 "name": "raid_bdev1",
00:15:08.506 "raid_level": "raid0",
00:15:08.506 "base_bdevs": [
00:15:08.506 "malloc1",
00:15:08.506 "malloc2",
00:15:08.506 "malloc3",
00:15:08.506 "malloc4"
00:15:08.506 ],
00:15:08.506 "strip_size_kb": 64,
00:15:08.506 "superblock": false,
00:15:08.506 "method": "bdev_raid_create",
00:15:08.506 "req_id": 1
00:15:08.506 }
00:15:08.506 Got JSON-RPC error response
00:15:08.506 response:
00:15:08.506 {
00:15:08.506 "code": -17,
00:15:08.506 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:08.506 }
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.506 [2024-10-30 10:43:29.849821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:08.506 [2024-10-30 10:43:29.850023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:08.506 [2024-10-30 10:43:29.850104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:08.506 [2024-10-30 10:43:29.850215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:08.506 [2024-10-30 10:43:29.853099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:08.506 [2024-10-30 10:43:29.853264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:08.506 [2024-10-30 10:43:29.853466] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:08.506 [2024-10-30 10:43:29.853688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:08.506 pt1
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:08.506 "name": "raid_bdev1",
00:15:08.506 "uuid": "1d7cb55b-6406-4e69-9191-98f8dd80042f",
00:15:08.506 "strip_size_kb": 64,
00:15:08.506 "state": "configuring",
00:15:08.506 "raid_level": "raid0",
00:15:08.506 "superblock": true,
00:15:08.506 "num_base_bdevs": 4,
00:15:08.506 "num_base_bdevs_discovered": 1,
00:15:08.506 "num_base_bdevs_operational": 4,
00:15:08.506 "base_bdevs_list": [
00:15:08.506 {
00:15:08.506 "name": "pt1",
00:15:08.506 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:08.506 "is_configured": true,
00:15:08.506 "data_offset": 2048,
00:15:08.506 "data_size": 63488
00:15:08.506 },
00:15:08.506 {
00:15:08.506 "name": null,
00:15:08.506 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:08.506 "is_configured": false,
00:15:08.506 "data_offset": 2048,
00:15:08.506 "data_size": 63488
00:15:08.506 },
00:15:08.506 {
00:15:08.506 "name": null,
00:15:08.506 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:08.506 "is_configured": false,
00:15:08.506 "data_offset": 2048,
00:15:08.506 "data_size": 63488
00:15:08.506 },
00:15:08.506 {
00:15:08.506 "name": null,
00:15:08.506 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:08.506 "is_configured": false,
00:15:08.506 "data_offset": 2048,
00:15:08.506 "data_size": 63488
00:15:08.506 }
00:15:08.506 ]
00:15:08.506 }'
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:08.506 10:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.073 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:15:09.073 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:09.073 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.073 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.073 [2024-10-30 10:43:30.346198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:09.073 [2024-10-30 10:43:30.346294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:09.073 [2024-10-30 10:43:30.346325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:15:09.073 [2024-10-30 10:43:30.346343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:09.073 [2024-10-30 10:43:30.346885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:09.073 [2024-10-30 10:43:30.346924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:09.073 [2024-10-30 10:43:30.347051] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:09.073 [2024-10-30 10:43:30.347091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:09.073 pt2
00:15:09.073 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.073 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:15:09.073 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.073 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.074 [2024-10-30 10:43:30.354182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:09.074 "name": "raid_bdev1",
00:15:09.074 "uuid": "1d7cb55b-6406-4e69-9191-98f8dd80042f",
00:15:09.074 "strip_size_kb": 64,
00:15:09.074 "state": "configuring",
00:15:09.074 "raid_level": "raid0",
00:15:09.074 "superblock": true,
00:15:09.074 "num_base_bdevs": 4,
00:15:09.074 "num_base_bdevs_discovered": 1,
00:15:09.074 "num_base_bdevs_operational": 4,
00:15:09.074 "base_bdevs_list": [
00:15:09.074 {
00:15:09.074 "name": "pt1",
00:15:09.074 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:09.074 "is_configured": true,
00:15:09.074 "data_offset": 2048,
00:15:09.074 "data_size": 63488
00:15:09.074 },
00:15:09.074 {
00:15:09.074 "name": null,
00:15:09.074 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:09.074 "is_configured": false,
00:15:09.074 "data_offset": 0,
00:15:09.074 "data_size": 63488
00:15:09.074 },
00:15:09.074 {
00:15:09.074 "name": null,
00:15:09.074 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:09.074 "is_configured": false,
00:15:09.074 "data_offset": 2048,
00:15:09.074 "data_size": 63488
00:15:09.074 },
00:15:09.074 {
00:15:09.074 "name": null,
00:15:09.074 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:09.074 "is_configured": false,
00:15:09.074 "data_offset": 2048,
00:15:09.074 "data_size": 63488
00:15:09.074 }
00:15:09.074 ]
00:15:09.074 }'
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:09.074 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.642 [2024-10-30 10:43:30.866361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:09.642 [2024-10-30 10:43:30.866439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:09.642 [2024-10-30 10:43:30.866472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:15:09.642 [2024-10-30 10:43:30.866487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:09.642 [2024-10-30 10:43:30.867097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:09.642 [2024-10-30 10:43:30.867123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:09.642 [2024-10-30 10:43:30.867232] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:09.642 [2024-10-30 10:43:30.867263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:09.642 pt2
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.642 [2024-10-30 10:43:30.874304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:09.642 [2024-10-30 10:43:30.874483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:09.642 [2024-10-30 10:43:30.874529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:15:09.642 [2024-10-30 10:43:30.874545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:09.642 [2024-10-30 10:43:30.875043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:09.642 [2024-10-30 10:43:30.875069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:09.642 [2024-10-30 10:43:30.875156] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:09.642 [2024-10-30 10:43:30.875184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:09.642 pt3
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.642 [2024-10-30 10:43:30.886290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:15:09.642 [2024-10-30 10:43:30.886385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:09.642 [2024-10-30 10:43:30.886414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:15:09.642 [2024-10-30 10:43:30.886427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:09.642 [2024-10-30 10:43:30.886892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:09.642 [2024-10-30 10:43:30.886928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:15:09.642 [2024-10-30 10:43:30.887035] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:15:09.642 [2024-10-30 10:43:30.887065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:15:09.642 [2024-10-30 10:43:30.887230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:09.642 [2024-10-30 10:43:30.887245] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:15:09.642 [2024-10-30 10:43:30.887546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:09.642 [2024-10-30 10:43:30.887728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:09.642 [2024-10-30 10:43:30.887750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:15:09.642 [2024-10-30 10:43:30.887906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:09.642 pt4
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:09.642 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:09.643 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:09.643 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:09.643 "name": "raid_bdev1",
00:15:09.643 "uuid": "1d7cb55b-6406-4e69-9191-98f8dd80042f",
00:15:09.643 "strip_size_kb": 64,
00:15:09.643 "state": "online",
00:15:09.643 "raid_level": "raid0",
00:15:09.643 "superblock": true,
00:15:09.643 "num_base_bdevs": 4,
00:15:09.643 "num_base_bdevs_discovered": 4,
00:15:09.643 "num_base_bdevs_operational": 4,
00:15:09.643 "base_bdevs_list": [
00:15:09.643 {
00:15:09.643 "name": "pt1",
00:15:09.643 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:09.643 "is_configured": true,
00:15:09.643 "data_offset": 2048,
00:15:09.643 "data_size": 63488
00:15:09.643 },
00:15:09.643 {
00:15:09.643 "name": "pt2",
00:15:09.643 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:09.643 "is_configured": true,
00:15:09.643 "data_offset": 2048,
00:15:09.643 "data_size": 63488
00:15:09.643 },
00:15:09.643 {
00:15:09.643 "name": "pt3",
00:15:09.643 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:09.643 "is_configured": true,
00:15:09.643 "data_offset": 2048,
00:15:09.643 "data_size": 63488
00:15:09.643 },
00:15:09.643 {
00:15:09.643 "name": "pt4",
00:15:09.643 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:09.643 "is_configured": true,
00:15:09.643 "data_offset": 2048,
00:15:09.643 "data_size": 63488
00:15:09.643 }
00:15:09.643 ]
00:15:09.643 }'
00:15:09.643 10:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:09.643 10:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.212 [2024-10-30 10:43:31.406940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:10.212 "name": "raid_bdev1",
00:15:10.212 "aliases": [
00:15:10.212 "1d7cb55b-6406-4e69-9191-98f8dd80042f"
00:15:10.212 ],
00:15:10.212 "product_name": "Raid Volume",
00:15:10.212 "block_size": 512,
00:15:10.212 "num_blocks": 253952,
00:15:10.212 "uuid": "1d7cb55b-6406-4e69-9191-98f8dd80042f",
00:15:10.212 "assigned_rate_limits": {
00:15:10.212 "rw_ios_per_sec": 0,
00:15:10.212 "rw_mbytes_per_sec": 0,
00:15:10.212 "r_mbytes_per_sec": 0,
00:15:10.212 "w_mbytes_per_sec": 0
00:15:10.212 },
00:15:10.212 "claimed": false,
00:15:10.212 "zoned": false,
00:15:10.212 "supported_io_types": {
00:15:10.212 "read": true,
00:15:10.212 "write": true,
00:15:10.212 "unmap": true,
00:15:10.212 "flush": true,
00:15:10.212 "reset": true,
00:15:10.212 "nvme_admin": false,
00:15:10.212 "nvme_io": false,
00:15:10.212 "nvme_io_md": false,
00:15:10.212 "write_zeroes": true,
00:15:10.212 "zcopy": false,
00:15:10.212 "get_zone_info": false,
00:15:10.212 "zone_management": false,
00:15:10.212 "zone_append": false,
00:15:10.212 "compare": false,
00:15:10.212 "compare_and_write": false,
00:15:10.212 "abort": false,
00:15:10.212 "seek_hole": false,
00:15:10.212 "seek_data": false,
00:15:10.212 "copy": false,
00:15:10.212 "nvme_iov_md": false
00:15:10.212 },
00:15:10.212 "memory_domains": [
00:15:10.212 {
00:15:10.212 "dma_device_id": "system",
00:15:10.212 "dma_device_type": 1
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:10.212 "dma_device_type": 2
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "dma_device_id": "system",
00:15:10.212 "dma_device_type": 1
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:10.212 "dma_device_type": 2
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "dma_device_id": "system",
00:15:10.212 "dma_device_type": 1
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:10.212 "dma_device_type": 2
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "dma_device_id": "system",
00:15:10.212 "dma_device_type": 1
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:10.212 "dma_device_type": 2
00:15:10.212 }
00:15:10.212 ],
00:15:10.212 "driver_specific": {
00:15:10.212 "raid": {
00:15:10.212 "uuid": "1d7cb55b-6406-4e69-9191-98f8dd80042f",
00:15:10.212 "strip_size_kb": 64,
00:15:10.212 "state": "online",
00:15:10.212 "raid_level": "raid0",
00:15:10.212 "superblock": true,
00:15:10.212 "num_base_bdevs": 4,
00:15:10.212 "num_base_bdevs_discovered": 4,
00:15:10.212 "num_base_bdevs_operational": 4,
00:15:10.212 "base_bdevs_list": [
00:15:10.212 {
00:15:10.212 "name": "pt1",
00:15:10.212 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:10.212 "is_configured": true,
00:15:10.212 "data_offset": 2048,
00:15:10.212 "data_size": 63488
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "name": "pt2",
00:15:10.212 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:10.212 "is_configured": true,
00:15:10.212 "data_offset": 2048,
00:15:10.212 "data_size": 63488
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "name": "pt3",
00:15:10.212 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:10.212 "is_configured": true,
00:15:10.212 "data_offset": 2048,
00:15:10.212 "data_size": 63488
00:15:10.212 },
00:15:10.212 {
00:15:10.212 "name": "pt4",
00:15:10.212 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:10.212 "is_configured": true,
00:15:10.212 "data_offset": 2048,
00:15:10.212 "data_size": 63488
00:15:10.212 }
00:15:10.212 ]
00:15:10.212 }
00:15:10.212 }
00:15:10.212 }'
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:10.212 pt2
00:15:10.212 pt3
00:15:10.212 pt4'
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.212 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:15:10.471 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:10.472 [2024-10-30 10:43:31.802964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1d7cb55b-6406-4e69-9191-98f8dd80042f '!=' 1d7cb55b-6406-4e69-9191-98f8dd80042f ']'
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71007
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 71007 ']'
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 71007
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71007
killing process with pid 71007
10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71007'
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 71007
00:15:10.472 [2024-10-30 10:43:31.875208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:10.472 10:43:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 71007
00:15:10.472 [2024-10-30 10:43:31.875307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:10.472 [2024-10-30 10:43:31.875439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:10.472 [2024-10-30 10:43:31.875453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:15:11.041 [2024-10-30 10:43:32.220277] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:11.977 10:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:15:11.977
00:15:11.977 real 0m5.890s
00:15:11.977 user 0m8.851s
00:15:11.977 sys 0m0.865s
00:15:11.977 10:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:11.977 10:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:11.977 ************************************
00:15:11.977 END TEST raid_superblock_test
00:15:11.977 ************************************ 00:15:11.977 10:43:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:11.977 10:43:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:11.977 10:43:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:11.977 10:43:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.977 ************************************ 00:15:11.977 START TEST raid_read_error_test 00:15:11.977 ************************************ 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9A11zlbdaT 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71268 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71268 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- 
# '[' -z 71268 ']' 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:11.977 10:43:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.237 [2024-10-30 10:43:33.448276] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:15:12.237 [2024-10-30 10:43:33.448728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71268 ] 00:15:12.237 [2024-10-30 10:43:33.639073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.496 [2024-10-30 10:43:33.796780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.755 [2024-10-30 10:43:34.029875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.755 [2024-10-30 10:43:34.029925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.322 BaseBdev1_malloc 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.322 true 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.322 [2024-10-30 10:43:34.585945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:13.322 [2024-10-30 10:43:34.586032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.322 [2024-10-30 10:43:34.586065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:13.322 [2024-10-30 10:43:34.586084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.322 [2024-10-30 10:43:34.588978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.322 [2024-10-30 10:43:34.589046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.322 BaseBdev1 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.322 BaseBdev2_malloc 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.322 true 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.322 [2024-10-30 10:43:34.648494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:13.322 [2024-10-30 10:43:34.648566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.322 [2024-10-30 10:43:34.648594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:13.322 [2024-10-30 10:43:34.648611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.322 [2024-10-30 10:43:34.651636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.322 [2024-10-30 10:43:34.651849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:13.322 BaseBdev2 00:15:13.322 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.323 BaseBdev3_malloc 00:15:13.323 10:43:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.323 true 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.323 [2024-10-30 10:43:34.722751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:13.323 [2024-10-30 10:43:34.722822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.323 [2024-10-30 10:43:34.722852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:13.323 [2024-10-30 10:43:34.722870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.323 [2024-10-30 10:43:34.725749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.323 [2024-10-30 10:43:34.725805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:13.323 BaseBdev3 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.323 BaseBdev4_malloc 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.323 true 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.323 [2024-10-30 10:43:34.783787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:13.323 [2024-10-30 10:43:34.783865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.323 [2024-10-30 10:43:34.783895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:13.323 [2024-10-30 10:43:34.783920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.323 [2024-10-30 10:43:34.786759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.323 [2024-10-30 10:43:34.786830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:13.323 BaseBdev4 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.323 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.582 [2024-10-30 10:43:34.791853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.582 [2024-10-30 10:43:34.794371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.582 [2024-10-30 10:43:34.794482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.582 [2024-10-30 10:43:34.794596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:13.582 [2024-10-30 10:43:34.795003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:13.582 [2024-10-30 10:43:34.795031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:13.582 [2024-10-30 10:43:34.795368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:13.582 [2024-10-30 10:43:34.795579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:13.582 [2024-10-30 10:43:34.795597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:13.582 [2024-10-30 10:43:34.795868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:13.582 10:43:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.582 "name": "raid_bdev1", 00:15:13.582 "uuid": "0a8f4755-6fd9-4dfd-9de2-42b3aa2f39fe", 00:15:13.582 "strip_size_kb": 64, 00:15:13.582 "state": "online", 00:15:13.582 "raid_level": "raid0", 00:15:13.582 "superblock": true, 00:15:13.582 "num_base_bdevs": 4, 00:15:13.582 "num_base_bdevs_discovered": 4, 00:15:13.582 "num_base_bdevs_operational": 4, 00:15:13.582 "base_bdevs_list": [ 00:15:13.582 
{ 00:15:13.582 "name": "BaseBdev1", 00:15:13.582 "uuid": "f12e03b4-08d9-5c58-ad57-f186e80d505f", 00:15:13.582 "is_configured": true, 00:15:13.582 "data_offset": 2048, 00:15:13.582 "data_size": 63488 00:15:13.582 }, 00:15:13.582 { 00:15:13.582 "name": "BaseBdev2", 00:15:13.582 "uuid": "5eebc3ec-9ebb-517a-b901-974435bb6c3c", 00:15:13.582 "is_configured": true, 00:15:13.582 "data_offset": 2048, 00:15:13.582 "data_size": 63488 00:15:13.582 }, 00:15:13.582 { 00:15:13.582 "name": "BaseBdev3", 00:15:13.582 "uuid": "d623cc23-2fa1-596b-a6e8-e0e50b2c9437", 00:15:13.582 "is_configured": true, 00:15:13.582 "data_offset": 2048, 00:15:13.582 "data_size": 63488 00:15:13.582 }, 00:15:13.582 { 00:15:13.582 "name": "BaseBdev4", 00:15:13.582 "uuid": "c6bfe7a0-9978-5f68-b0ed-0c9aabb5babb", 00:15:13.582 "is_configured": true, 00:15:13.582 "data_offset": 2048, 00:15:13.582 "data_size": 63488 00:15:13.582 } 00:15:13.582 ] 00:15:13.582 }' 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.582 10:43:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.151 10:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:14.151 10:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:14.151 [2024-10-30 10:43:35.457579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.084 10:43:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.084 10:43:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.084 "name": "raid_bdev1", 00:15:15.084 "uuid": "0a8f4755-6fd9-4dfd-9de2-42b3aa2f39fe", 00:15:15.084 "strip_size_kb": 64, 00:15:15.084 "state": "online", 00:15:15.084 "raid_level": "raid0", 00:15:15.084 "superblock": true, 00:15:15.084 "num_base_bdevs": 4, 00:15:15.084 "num_base_bdevs_discovered": 4, 00:15:15.084 "num_base_bdevs_operational": 4, 00:15:15.084 "base_bdevs_list": [ 00:15:15.084 { 00:15:15.084 "name": "BaseBdev1", 00:15:15.084 "uuid": "f12e03b4-08d9-5c58-ad57-f186e80d505f", 00:15:15.084 "is_configured": true, 00:15:15.084 "data_offset": 2048, 00:15:15.084 "data_size": 63488 00:15:15.084 }, 00:15:15.084 { 00:15:15.084 "name": "BaseBdev2", 00:15:15.084 "uuid": "5eebc3ec-9ebb-517a-b901-974435bb6c3c", 00:15:15.084 "is_configured": true, 00:15:15.084 "data_offset": 2048, 00:15:15.084 "data_size": 63488 00:15:15.084 }, 00:15:15.084 { 00:15:15.084 "name": "BaseBdev3", 00:15:15.084 "uuid": "d623cc23-2fa1-596b-a6e8-e0e50b2c9437", 00:15:15.084 "is_configured": true, 00:15:15.084 "data_offset": 2048, 00:15:15.084 "data_size": 63488 00:15:15.084 }, 00:15:15.084 { 00:15:15.084 "name": "BaseBdev4", 00:15:15.084 "uuid": "c6bfe7a0-9978-5f68-b0ed-0c9aabb5babb", 00:15:15.084 "is_configured": true, 00:15:15.084 "data_offset": 2048, 00:15:15.084 "data_size": 63488 00:15:15.084 } 00:15:15.084 ] 00:15:15.084 }' 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.084 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.652 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:15.652 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.652 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.652 [2024-10-30 10:43:36.871918] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.652 [2024-10-30 10:43:36.872133] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.652 [2024-10-30 10:43:36.875625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.652 [2024-10-30 10:43:36.875871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.652 [2024-10-30 10:43:36.875950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.652 [2024-10-30 10:43:36.875987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:15.652 { 00:15:15.652 "results": [ 00:15:15.652 { 00:15:15.652 "job": "raid_bdev1", 00:15:15.652 "core_mask": "0x1", 00:15:15.652 "workload": "randrw", 00:15:15.652 "percentage": 50, 00:15:15.652 "status": "finished", 00:15:15.652 "queue_depth": 1, 00:15:15.652 "io_size": 131072, 00:15:15.652 "runtime": 1.411902, 00:15:15.652 "iops": 9961.739554161692, 00:15:15.652 "mibps": 1245.2174442702114, 00:15:15.652 "io_failed": 1, 00:15:15.652 "io_timeout": 0, 00:15:15.653 "avg_latency_us": 140.3159153600558, 00:15:15.653 "min_latency_us": 42.82181818181818, 00:15:15.653 "max_latency_us": 2263.970909090909 00:15:15.653 } 00:15:15.653 ], 00:15:15.653 "core_count": 1 00:15:15.653 } 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71268 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71268 ']' 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71268 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71268 00:15:15.653 killing process with pid 71268 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71268' 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71268 00:15:15.653 [2024-10-30 10:43:36.912915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.653 10:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71268 00:15:15.911 [2024-10-30 10:43:37.213904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.858 10:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9A11zlbdaT 00:15:16.858 10:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:16.858 10:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:17.117 10:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:17.117 10:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:17.117 ************************************ 00:15:17.117 END TEST raid_read_error_test 00:15:17.117 ************************************ 00:15:17.117 10:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:17.117 10:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:17.117 10:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:17.118 00:15:17.118 real 0m5.006s 
00:15:17.118 user 0m6.243s 00:15:17.118 sys 0m0.620s 00:15:17.118 10:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:17.118 10:43:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.118 10:43:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:15:17.118 10:43:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:17.118 10:43:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:17.118 10:43:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.118 ************************************ 00:15:17.118 START TEST raid_write_error_test 00:15:17.118 ************************************ 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.a8uJ2hgpPf 00:15:17.118 10:43:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71415 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71415 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71415 ']' 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:17.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:17.118 10:43:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.118 [2024-10-30 10:43:38.507460] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:15:17.118 [2024-10-30 10:43:38.507634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71415 ] 00:15:17.376 [2024-10-30 10:43:38.693764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.376 [2024-10-30 10:43:38.845297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.634 [2024-10-30 10:43:39.076083] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.634 [2024-10-30 10:43:39.076163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.199 BaseBdev1_malloc 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.199 true 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.199 [2024-10-30 10:43:39.598193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:18.199 [2024-10-30 10:43:39.598256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.199 [2024-10-30 10:43:39.598285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:18.199 [2024-10-30 10:43:39.598304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.199 [2024-10-30 10:43:39.601124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.199 [2024-10-30 10:43:39.601189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.199 BaseBdev1 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.199 BaseBdev2_malloc 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:18.199 10:43:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.199 true 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.199 [2024-10-30 10:43:39.658009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:18.199 [2024-10-30 10:43:39.658070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.199 [2024-10-30 10:43:39.658102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:18.199 [2024-10-30 10:43:39.658120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.199 [2024-10-30 10:43:39.660846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.199 [2024-10-30 10:43:39.660894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:18.199 BaseBdev2 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.199 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:18.458 BaseBdev3_malloc 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.458 true 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.458 [2024-10-30 10:43:39.731406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:18.458 [2024-10-30 10:43:39.731471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.458 [2024-10-30 10:43:39.731498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:18.458 [2024-10-30 10:43:39.731517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.458 [2024-10-30 10:43:39.734273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.458 [2024-10-30 10:43:39.734320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:18.458 BaseBdev3 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.458 BaseBdev4_malloc 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:18.458 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.459 true 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.459 [2024-10-30 10:43:39.791034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:18.459 [2024-10-30 10:43:39.791094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.459 [2024-10-30 10:43:39.791120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:18.459 [2024-10-30 10:43:39.791138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.459 [2024-10-30 10:43:39.793842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.459 [2024-10-30 10:43:39.793891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:18.459 BaseBdev4 
00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.459 [2024-10-30 10:43:39.803120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.459 [2024-10-30 10:43:39.805533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.459 [2024-10-30 10:43:39.805642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.459 [2024-10-30 10:43:39.805747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:18.459 [2024-10-30 10:43:39.806062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:18.459 [2024-10-30 10:43:39.806088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:18.459 [2024-10-30 10:43:39.806407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:18.459 [2024-10-30 10:43:39.806617] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:18.459 [2024-10-30 10:43:39.806638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:18.459 [2024-10-30 10:43:39.806827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.459 "name": "raid_bdev1", 00:15:18.459 "uuid": "f5d1855c-dd57-49de-9056-8258c305e68d", 00:15:18.459 "strip_size_kb": 64, 00:15:18.459 "state": "online", 00:15:18.459 "raid_level": "raid0", 00:15:18.459 "superblock": true, 00:15:18.459 "num_base_bdevs": 4, 00:15:18.459 "num_base_bdevs_discovered": 4, 00:15:18.459 
"num_base_bdevs_operational": 4, 00:15:18.459 "base_bdevs_list": [ 00:15:18.459 { 00:15:18.459 "name": "BaseBdev1", 00:15:18.459 "uuid": "3a52cc47-b765-509e-8c3d-6a7b0c5d82c4", 00:15:18.459 "is_configured": true, 00:15:18.459 "data_offset": 2048, 00:15:18.459 "data_size": 63488 00:15:18.459 }, 00:15:18.459 { 00:15:18.459 "name": "BaseBdev2", 00:15:18.459 "uuid": "5685ef35-4d99-5b90-a6fd-ffebdbec628c", 00:15:18.459 "is_configured": true, 00:15:18.459 "data_offset": 2048, 00:15:18.459 "data_size": 63488 00:15:18.459 }, 00:15:18.459 { 00:15:18.459 "name": "BaseBdev3", 00:15:18.459 "uuid": "30644f27-3eaa-5926-b64d-521f536fafbc", 00:15:18.459 "is_configured": true, 00:15:18.459 "data_offset": 2048, 00:15:18.459 "data_size": 63488 00:15:18.459 }, 00:15:18.459 { 00:15:18.459 "name": "BaseBdev4", 00:15:18.459 "uuid": "e72c1277-4389-56bd-88a1-102bc722c0b2", 00:15:18.459 "is_configured": true, 00:15:18.459 "data_offset": 2048, 00:15:18.459 "data_size": 63488 00:15:18.459 } 00:15:18.459 ] 00:15:18.459 }' 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.459 10:43:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.025 10:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:19.025 10:43:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:19.025 [2024-10-30 10:43:40.444653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.962 "name": "raid_bdev1", 00:15:19.962 "uuid": "f5d1855c-dd57-49de-9056-8258c305e68d", 00:15:19.962 "strip_size_kb": 64, 00:15:19.962 "state": "online", 00:15:19.962 "raid_level": "raid0", 00:15:19.962 "superblock": true, 00:15:19.962 "num_base_bdevs": 4, 00:15:19.962 "num_base_bdevs_discovered": 4, 00:15:19.962 "num_base_bdevs_operational": 4, 00:15:19.962 "base_bdevs_list": [ 00:15:19.962 { 00:15:19.962 "name": "BaseBdev1", 00:15:19.962 "uuid": "3a52cc47-b765-509e-8c3d-6a7b0c5d82c4", 00:15:19.962 "is_configured": true, 00:15:19.962 "data_offset": 2048, 00:15:19.962 "data_size": 63488 00:15:19.962 }, 00:15:19.962 { 00:15:19.962 "name": "BaseBdev2", 00:15:19.962 "uuid": "5685ef35-4d99-5b90-a6fd-ffebdbec628c", 00:15:19.962 "is_configured": true, 00:15:19.962 "data_offset": 2048, 00:15:19.962 "data_size": 63488 00:15:19.962 }, 00:15:19.962 { 00:15:19.962 "name": "BaseBdev3", 00:15:19.962 "uuid": "30644f27-3eaa-5926-b64d-521f536fafbc", 00:15:19.962 "is_configured": true, 00:15:19.962 "data_offset": 2048, 00:15:19.962 "data_size": 63488 00:15:19.962 }, 00:15:19.962 { 00:15:19.962 "name": "BaseBdev4", 00:15:19.962 "uuid": "e72c1277-4389-56bd-88a1-102bc722c0b2", 00:15:19.962 "is_configured": true, 00:15:19.962 "data_offset": 2048, 00:15:19.962 "data_size": 63488 00:15:19.962 } 00:15:19.962 ] 00:15:19.962 }' 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.962 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.530 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.530 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.530 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:20.530 [2024-10-30 10:43:41.866487] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.530 [2024-10-30 10:43:41.866536] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.530 [2024-10-30 10:43:41.869892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.530 [2024-10-30 10:43:41.869985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.530 [2024-10-30 10:43:41.870047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.530 [2024-10-30 10:43:41.870067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:20.530 { 00:15:20.530 "results": [ 00:15:20.530 { 00:15:20.530 "job": "raid_bdev1", 00:15:20.530 "core_mask": "0x1", 00:15:20.530 "workload": "randrw", 00:15:20.531 "percentage": 50, 00:15:20.531 "status": "finished", 00:15:20.531 "queue_depth": 1, 00:15:20.531 "io_size": 131072, 00:15:20.531 "runtime": 1.419423, 00:15:20.531 "iops": 10679.691677533758, 00:15:20.531 "mibps": 1334.9614596917197, 00:15:20.531 "io_failed": 1, 00:15:20.531 "io_timeout": 0, 00:15:20.531 "avg_latency_us": 130.70999088510433, 00:15:20.531 "min_latency_us": 41.89090909090909, 00:15:20.531 "max_latency_us": 1839.4763636363637 00:15:20.531 } 00:15:20.531 ], 00:15:20.531 "core_count": 1 00:15:20.531 } 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71415 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71415 ']' 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71415 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71415 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:20.531 killing process with pid 71415 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71415' 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71415 00:15:20.531 [2024-10-30 10:43:41.905408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.531 10:43:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71415 00:15:20.790 [2024-10-30 10:43:42.199168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.a8uJ2hgpPf 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:15:22.234 00:15:22.234 real 0m4.902s 00:15:22.234 user 0m6.123s 00:15:22.234 sys 0m0.573s 00:15:22.234 
10:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.234 ************************************ 00:15:22.234 END TEST raid_write_error_test 00:15:22.234 ************************************ 00:15:22.234 10:43:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.234 10:43:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:22.234 10:43:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:15:22.234 10:43:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:22.234 10:43:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:22.234 10:43:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.234 ************************************ 00:15:22.234 START TEST raid_state_function_test 00:15:22.234 ************************************ 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.234 10:43:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:22.234 10:43:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71563 00:15:22.234 Process raid pid: 71563 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71563' 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71563 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71563 ']' 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.234 10:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.234 [2024-10-30 10:43:43.454548] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:15:22.234 [2024-10-30 10:43:43.454705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.234 [2024-10-30 10:43:43.649151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.493 [2024-10-30 10:43:43.784928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.751 [2024-10-30 10:43:43.990064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.751 [2024-10-30 10:43:43.990104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.319 [2024-10-30 10:43:44.496640] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.319 [2024-10-30 10:43:44.496701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.319 [2024-10-30 10:43:44.496717] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.319 [2024-10-30 10:43:44.496732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.319 [2024-10-30 10:43:44.496742] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:23.319 [2024-10-30 10:43:44.496755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.319 [2024-10-30 10:43:44.496765] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.319 [2024-10-30 10:43:44.496778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.319 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.319 "name": "Existed_Raid", 00:15:23.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.319 "strip_size_kb": 64, 00:15:23.319 "state": "configuring", 00:15:23.319 "raid_level": "concat", 00:15:23.319 "superblock": false, 00:15:23.319 "num_base_bdevs": 4, 00:15:23.319 "num_base_bdevs_discovered": 0, 00:15:23.319 "num_base_bdevs_operational": 4, 00:15:23.319 "base_bdevs_list": [ 00:15:23.319 { 00:15:23.319 "name": "BaseBdev1", 00:15:23.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.319 "is_configured": false, 00:15:23.319 "data_offset": 0, 00:15:23.319 "data_size": 0 00:15:23.319 }, 00:15:23.319 { 00:15:23.319 "name": "BaseBdev2", 00:15:23.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.319 "is_configured": false, 00:15:23.319 "data_offset": 0, 00:15:23.319 "data_size": 0 00:15:23.319 }, 00:15:23.319 { 00:15:23.319 "name": "BaseBdev3", 00:15:23.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.319 "is_configured": false, 00:15:23.319 "data_offset": 0, 00:15:23.319 "data_size": 0 00:15:23.319 }, 00:15:23.319 { 00:15:23.319 "name": "BaseBdev4", 00:15:23.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.319 "is_configured": false, 00:15:23.319 "data_offset": 0, 00:15:23.319 "data_size": 0 00:15:23.319 } 00:15:23.320 ] 00:15:23.320 }' 00:15:23.320 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.320 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.578 10:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:23.578 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.578 10:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.578 [2024-10-30 10:43:45.004690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.578 [2024-10-30 10:43:45.004743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:23.578 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.578 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.578 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.578 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.578 [2024-10-30 10:43:45.012671] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.578 [2024-10-30 10:43:45.012721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.578 [2024-10-30 10:43:45.012736] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.579 [2024-10-30 10:43:45.012751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.579 [2024-10-30 10:43:45.012760] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.579 [2024-10-30 10:43:45.012774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.579 [2024-10-30 10:43:45.012783] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.579 [2024-10-30 10:43:45.012796] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.579 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.579 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.579 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.579 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.838 [2024-10-30 10:43:45.057930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.838 BaseBdev1 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.838 [ 00:15:23.838 { 00:15:23.838 "name": "BaseBdev1", 00:15:23.838 "aliases": [ 00:15:23.838 "5be1f7d3-ffd8-4ab4-99f8-cfb02df3ef08" 00:15:23.838 ], 00:15:23.838 "product_name": "Malloc disk", 00:15:23.838 "block_size": 512, 00:15:23.838 "num_blocks": 65536, 00:15:23.838 "uuid": "5be1f7d3-ffd8-4ab4-99f8-cfb02df3ef08", 00:15:23.838 "assigned_rate_limits": { 00:15:23.838 "rw_ios_per_sec": 0, 00:15:23.838 "rw_mbytes_per_sec": 0, 00:15:23.838 "r_mbytes_per_sec": 0, 00:15:23.838 "w_mbytes_per_sec": 0 00:15:23.838 }, 00:15:23.838 "claimed": true, 00:15:23.838 "claim_type": "exclusive_write", 00:15:23.838 "zoned": false, 00:15:23.838 "supported_io_types": { 00:15:23.838 "read": true, 00:15:23.838 "write": true, 00:15:23.838 "unmap": true, 00:15:23.838 "flush": true, 00:15:23.838 "reset": true, 00:15:23.838 "nvme_admin": false, 00:15:23.838 "nvme_io": false, 00:15:23.838 "nvme_io_md": false, 00:15:23.838 "write_zeroes": true, 00:15:23.838 "zcopy": true, 00:15:23.838 "get_zone_info": false, 00:15:23.838 "zone_management": false, 00:15:23.838 "zone_append": false, 00:15:23.838 "compare": false, 00:15:23.838 "compare_and_write": false, 00:15:23.838 "abort": true, 00:15:23.838 "seek_hole": false, 00:15:23.838 "seek_data": false, 00:15:23.838 "copy": true, 00:15:23.838 "nvme_iov_md": false 00:15:23.838 }, 00:15:23.838 "memory_domains": [ 00:15:23.838 { 00:15:23.838 "dma_device_id": "system", 00:15:23.838 "dma_device_type": 1 00:15:23.838 }, 00:15:23.838 { 00:15:23.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.838 "dma_device_type": 2 00:15:23.838 } 00:15:23.838 ], 00:15:23.838 "driver_specific": {} 00:15:23.838 } 00:15:23.838 ] 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.838 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.838 "name": "Existed_Raid", 
00:15:23.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.838 "strip_size_kb": 64, 00:15:23.838 "state": "configuring", 00:15:23.838 "raid_level": "concat", 00:15:23.838 "superblock": false, 00:15:23.838 "num_base_bdevs": 4, 00:15:23.838 "num_base_bdevs_discovered": 1, 00:15:23.838 "num_base_bdevs_operational": 4, 00:15:23.838 "base_bdevs_list": [ 00:15:23.838 { 00:15:23.838 "name": "BaseBdev1", 00:15:23.838 "uuid": "5be1f7d3-ffd8-4ab4-99f8-cfb02df3ef08", 00:15:23.838 "is_configured": true, 00:15:23.838 "data_offset": 0, 00:15:23.838 "data_size": 65536 00:15:23.838 }, 00:15:23.838 { 00:15:23.839 "name": "BaseBdev2", 00:15:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.839 "is_configured": false, 00:15:23.839 "data_offset": 0, 00:15:23.839 "data_size": 0 00:15:23.839 }, 00:15:23.839 { 00:15:23.839 "name": "BaseBdev3", 00:15:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.839 "is_configured": false, 00:15:23.839 "data_offset": 0, 00:15:23.839 "data_size": 0 00:15:23.839 }, 00:15:23.839 { 00:15:23.839 "name": "BaseBdev4", 00:15:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.839 "is_configured": false, 00:15:23.839 "data_offset": 0, 00:15:23.839 "data_size": 0 00:15:23.839 } 00:15:23.839 ] 00:15:23.839 }' 00:15:23.839 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.839 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.407 [2024-10-30 10:43:45.622172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.407 [2024-10-30 10:43:45.622238] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.407 [2024-10-30 10:43:45.630232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.407 [2024-10-30 10:43:45.632873] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.407 [2024-10-30 10:43:45.632930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.407 [2024-10-30 10:43:45.632946] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.407 [2024-10-30 10:43:45.632963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.407 [2024-10-30 10:43:45.632987] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.407 [2024-10-30 10:43:45.633004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.407 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.407 "name": "Existed_Raid", 00:15:24.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.407 "strip_size_kb": 64, 00:15:24.407 "state": "configuring", 00:15:24.407 "raid_level": "concat", 00:15:24.407 "superblock": false, 00:15:24.407 "num_base_bdevs": 4, 00:15:24.407 
"num_base_bdevs_discovered": 1, 00:15:24.407 "num_base_bdevs_operational": 4, 00:15:24.407 "base_bdevs_list": [ 00:15:24.407 { 00:15:24.407 "name": "BaseBdev1", 00:15:24.407 "uuid": "5be1f7d3-ffd8-4ab4-99f8-cfb02df3ef08", 00:15:24.407 "is_configured": true, 00:15:24.407 "data_offset": 0, 00:15:24.407 "data_size": 65536 00:15:24.407 }, 00:15:24.407 { 00:15:24.407 "name": "BaseBdev2", 00:15:24.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.407 "is_configured": false, 00:15:24.407 "data_offset": 0, 00:15:24.407 "data_size": 0 00:15:24.407 }, 00:15:24.407 { 00:15:24.407 "name": "BaseBdev3", 00:15:24.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.407 "is_configured": false, 00:15:24.407 "data_offset": 0, 00:15:24.407 "data_size": 0 00:15:24.407 }, 00:15:24.407 { 00:15:24.407 "name": "BaseBdev4", 00:15:24.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.407 "is_configured": false, 00:15:24.407 "data_offset": 0, 00:15:24.407 "data_size": 0 00:15:24.407 } 00:15:24.408 ] 00:15:24.408 }' 00:15:24.408 10:43:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.408 10:43:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.975 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.975 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.975 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.975 [2024-10-30 10:43:46.180880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.975 BaseBdev2 00:15:24.975 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.975 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:24.976 10:43:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.976 [ 00:15:24.976 { 00:15:24.976 "name": "BaseBdev2", 00:15:24.976 "aliases": [ 00:15:24.976 "d50823e4-f2eb-4773-827d-e844bf94d3b1" 00:15:24.976 ], 00:15:24.976 "product_name": "Malloc disk", 00:15:24.976 "block_size": 512, 00:15:24.976 "num_blocks": 65536, 00:15:24.976 "uuid": "d50823e4-f2eb-4773-827d-e844bf94d3b1", 00:15:24.976 "assigned_rate_limits": { 00:15:24.976 "rw_ios_per_sec": 0, 00:15:24.976 "rw_mbytes_per_sec": 0, 00:15:24.976 "r_mbytes_per_sec": 0, 00:15:24.976 "w_mbytes_per_sec": 0 00:15:24.976 }, 00:15:24.976 "claimed": true, 00:15:24.976 "claim_type": "exclusive_write", 00:15:24.976 "zoned": false, 00:15:24.976 "supported_io_types": { 
00:15:24.976 "read": true, 00:15:24.976 "write": true, 00:15:24.976 "unmap": true, 00:15:24.976 "flush": true, 00:15:24.976 "reset": true, 00:15:24.976 "nvme_admin": false, 00:15:24.976 "nvme_io": false, 00:15:24.976 "nvme_io_md": false, 00:15:24.976 "write_zeroes": true, 00:15:24.976 "zcopy": true, 00:15:24.976 "get_zone_info": false, 00:15:24.976 "zone_management": false, 00:15:24.976 "zone_append": false, 00:15:24.976 "compare": false, 00:15:24.976 "compare_and_write": false, 00:15:24.976 "abort": true, 00:15:24.976 "seek_hole": false, 00:15:24.976 "seek_data": false, 00:15:24.976 "copy": true, 00:15:24.976 "nvme_iov_md": false 00:15:24.976 }, 00:15:24.976 "memory_domains": [ 00:15:24.976 { 00:15:24.976 "dma_device_id": "system", 00:15:24.976 "dma_device_type": 1 00:15:24.976 }, 00:15:24.976 { 00:15:24.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.976 "dma_device_type": 2 00:15:24.976 } 00:15:24.976 ], 00:15:24.976 "driver_specific": {} 00:15:24.976 } 00:15:24.976 ] 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.976 "name": "Existed_Raid", 00:15:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.976 "strip_size_kb": 64, 00:15:24.976 "state": "configuring", 00:15:24.976 "raid_level": "concat", 00:15:24.976 "superblock": false, 00:15:24.976 "num_base_bdevs": 4, 00:15:24.976 "num_base_bdevs_discovered": 2, 00:15:24.976 "num_base_bdevs_operational": 4, 00:15:24.976 "base_bdevs_list": [ 00:15:24.976 { 00:15:24.976 "name": "BaseBdev1", 00:15:24.976 "uuid": "5be1f7d3-ffd8-4ab4-99f8-cfb02df3ef08", 00:15:24.976 "is_configured": true, 00:15:24.976 "data_offset": 0, 00:15:24.976 "data_size": 65536 00:15:24.976 }, 00:15:24.976 { 00:15:24.976 "name": "BaseBdev2", 00:15:24.976 "uuid": "d50823e4-f2eb-4773-827d-e844bf94d3b1", 00:15:24.976 
"is_configured": true, 00:15:24.976 "data_offset": 0, 00:15:24.976 "data_size": 65536 00:15:24.976 }, 00:15:24.976 { 00:15:24.976 "name": "BaseBdev3", 00:15:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.976 "is_configured": false, 00:15:24.976 "data_offset": 0, 00:15:24.976 "data_size": 0 00:15:24.976 }, 00:15:24.976 { 00:15:24.976 "name": "BaseBdev4", 00:15:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.976 "is_configured": false, 00:15:24.976 "data_offset": 0, 00:15:24.976 "data_size": 0 00:15:24.976 } 00:15:24.976 ] 00:15:24.976 }' 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.976 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.235 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.235 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.235 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.495 [2024-10-30 10:43:46.747927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.495 BaseBdev3 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.495 [ 00:15:25.495 { 00:15:25.495 "name": "BaseBdev3", 00:15:25.495 "aliases": [ 00:15:25.495 "ef2c416d-2b7d-401a-b8ff-e1984591da91" 00:15:25.495 ], 00:15:25.495 "product_name": "Malloc disk", 00:15:25.495 "block_size": 512, 00:15:25.495 "num_blocks": 65536, 00:15:25.495 "uuid": "ef2c416d-2b7d-401a-b8ff-e1984591da91", 00:15:25.495 "assigned_rate_limits": { 00:15:25.495 "rw_ios_per_sec": 0, 00:15:25.495 "rw_mbytes_per_sec": 0, 00:15:25.495 "r_mbytes_per_sec": 0, 00:15:25.495 "w_mbytes_per_sec": 0 00:15:25.495 }, 00:15:25.495 "claimed": true, 00:15:25.495 "claim_type": "exclusive_write", 00:15:25.495 "zoned": false, 00:15:25.495 "supported_io_types": { 00:15:25.495 "read": true, 00:15:25.495 "write": true, 00:15:25.495 "unmap": true, 00:15:25.495 "flush": true, 00:15:25.495 "reset": true, 00:15:25.495 "nvme_admin": false, 00:15:25.495 "nvme_io": false, 00:15:25.495 "nvme_io_md": false, 00:15:25.495 "write_zeroes": true, 00:15:25.495 "zcopy": true, 00:15:25.495 "get_zone_info": false, 00:15:25.495 "zone_management": false, 00:15:25.495 "zone_append": false, 00:15:25.495 "compare": false, 00:15:25.495 "compare_and_write": false, 
00:15:25.495 "abort": true, 00:15:25.495 "seek_hole": false, 00:15:25.495 "seek_data": false, 00:15:25.495 "copy": true, 00:15:25.495 "nvme_iov_md": false 00:15:25.495 }, 00:15:25.495 "memory_domains": [ 00:15:25.495 { 00:15:25.495 "dma_device_id": "system", 00:15:25.495 "dma_device_type": 1 00:15:25.495 }, 00:15:25.495 { 00:15:25.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.495 "dma_device_type": 2 00:15:25.495 } 00:15:25.495 ], 00:15:25.495 "driver_specific": {} 00:15:25.495 } 00:15:25.495 ] 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.495 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.495 "name": "Existed_Raid", 00:15:25.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.495 "strip_size_kb": 64, 00:15:25.495 "state": "configuring", 00:15:25.495 "raid_level": "concat", 00:15:25.495 "superblock": false, 00:15:25.495 "num_base_bdevs": 4, 00:15:25.495 "num_base_bdevs_discovered": 3, 00:15:25.495 "num_base_bdevs_operational": 4, 00:15:25.496 "base_bdevs_list": [ 00:15:25.496 { 00:15:25.496 "name": "BaseBdev1", 00:15:25.496 "uuid": "5be1f7d3-ffd8-4ab4-99f8-cfb02df3ef08", 00:15:25.496 "is_configured": true, 00:15:25.496 "data_offset": 0, 00:15:25.496 "data_size": 65536 00:15:25.496 }, 00:15:25.496 { 00:15:25.496 "name": "BaseBdev2", 00:15:25.496 "uuid": "d50823e4-f2eb-4773-827d-e844bf94d3b1", 00:15:25.496 "is_configured": true, 00:15:25.496 "data_offset": 0, 00:15:25.496 "data_size": 65536 00:15:25.496 }, 00:15:25.496 { 00:15:25.496 "name": "BaseBdev3", 00:15:25.496 "uuid": "ef2c416d-2b7d-401a-b8ff-e1984591da91", 00:15:25.496 "is_configured": true, 00:15:25.496 "data_offset": 0, 00:15:25.496 "data_size": 65536 00:15:25.496 }, 00:15:25.496 { 00:15:25.496 "name": "BaseBdev4", 00:15:25.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.496 "is_configured": false, 
00:15:25.496 "data_offset": 0, 00:15:25.496 "data_size": 0 00:15:25.496 } 00:15:25.496 ] 00:15:25.496 }' 00:15:25.496 10:43:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.496 10:43:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.063 [2024-10-30 10:43:47.335460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.063 [2024-10-30 10:43:47.335520] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:26.063 [2024-10-30 10:43:47.335533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:26.063 [2024-10-30 10:43:47.335876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:26.063 [2024-10-30 10:43:47.336126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:26.063 [2024-10-30 10:43:47.336159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:26.063 [2024-10-30 10:43:47.336466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.063 BaseBdev4 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.063 [ 00:15:26.063 { 00:15:26.063 "name": "BaseBdev4", 00:15:26.063 "aliases": [ 00:15:26.063 "6a399698-cdc0-47aa-a275-7788fcb4452a" 00:15:26.063 ], 00:15:26.063 "product_name": "Malloc disk", 00:15:26.063 "block_size": 512, 00:15:26.063 "num_blocks": 65536, 00:15:26.063 "uuid": "6a399698-cdc0-47aa-a275-7788fcb4452a", 00:15:26.063 "assigned_rate_limits": { 00:15:26.063 "rw_ios_per_sec": 0, 00:15:26.063 "rw_mbytes_per_sec": 0, 00:15:26.063 "r_mbytes_per_sec": 0, 00:15:26.063 "w_mbytes_per_sec": 0 00:15:26.063 }, 00:15:26.063 "claimed": true, 00:15:26.063 "claim_type": "exclusive_write", 00:15:26.063 "zoned": false, 00:15:26.063 "supported_io_types": { 00:15:26.063 "read": true, 00:15:26.063 "write": true, 00:15:26.063 "unmap": true, 00:15:26.063 "flush": true, 00:15:26.063 "reset": true, 00:15:26.063 
"nvme_admin": false, 00:15:26.063 "nvme_io": false, 00:15:26.063 "nvme_io_md": false, 00:15:26.063 "write_zeroes": true, 00:15:26.063 "zcopy": true, 00:15:26.063 "get_zone_info": false, 00:15:26.063 "zone_management": false, 00:15:26.063 "zone_append": false, 00:15:26.063 "compare": false, 00:15:26.063 "compare_and_write": false, 00:15:26.063 "abort": true, 00:15:26.063 "seek_hole": false, 00:15:26.063 "seek_data": false, 00:15:26.063 "copy": true, 00:15:26.063 "nvme_iov_md": false 00:15:26.063 }, 00:15:26.063 "memory_domains": [ 00:15:26.063 { 00:15:26.063 "dma_device_id": "system", 00:15:26.063 "dma_device_type": 1 00:15:26.063 }, 00:15:26.063 { 00:15:26.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.063 "dma_device_type": 2 00:15:26.063 } 00:15:26.063 ], 00:15:26.063 "driver_specific": {} 00:15:26.063 } 00:15:26.063 ] 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.063 
10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.063 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.064 "name": "Existed_Raid", 00:15:26.064 "uuid": "48eb3810-ef08-4ccd-8d8b-d029d2e5a467", 00:15:26.064 "strip_size_kb": 64, 00:15:26.064 "state": "online", 00:15:26.064 "raid_level": "concat", 00:15:26.064 "superblock": false, 00:15:26.064 "num_base_bdevs": 4, 00:15:26.064 "num_base_bdevs_discovered": 4, 00:15:26.064 "num_base_bdevs_operational": 4, 00:15:26.064 "base_bdevs_list": [ 00:15:26.064 { 00:15:26.064 "name": "BaseBdev1", 00:15:26.064 "uuid": "5be1f7d3-ffd8-4ab4-99f8-cfb02df3ef08", 00:15:26.064 "is_configured": true, 00:15:26.064 "data_offset": 0, 00:15:26.064 "data_size": 65536 00:15:26.064 }, 00:15:26.064 { 00:15:26.064 "name": "BaseBdev2", 00:15:26.064 "uuid": "d50823e4-f2eb-4773-827d-e844bf94d3b1", 00:15:26.064 "is_configured": true, 00:15:26.064 "data_offset": 0, 00:15:26.064 "data_size": 65536 00:15:26.064 }, 00:15:26.064 { 00:15:26.064 "name": "BaseBdev3", 
00:15:26.064 "uuid": "ef2c416d-2b7d-401a-b8ff-e1984591da91", 00:15:26.064 "is_configured": true, 00:15:26.064 "data_offset": 0, 00:15:26.064 "data_size": 65536 00:15:26.064 }, 00:15:26.064 { 00:15:26.064 "name": "BaseBdev4", 00:15:26.064 "uuid": "6a399698-cdc0-47aa-a275-7788fcb4452a", 00:15:26.064 "is_configured": true, 00:15:26.064 "data_offset": 0, 00:15:26.064 "data_size": 65536 00:15:26.064 } 00:15:26.064 ] 00:15:26.064 }' 00:15:26.064 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.064 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.633 [2024-10-30 10:43:47.908131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.633 10:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.633 
10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.633 "name": "Existed_Raid", 00:15:26.633 "aliases": [ 00:15:26.633 "48eb3810-ef08-4ccd-8d8b-d029d2e5a467" 00:15:26.633 ], 00:15:26.633 "product_name": "Raid Volume", 00:15:26.633 "block_size": 512, 00:15:26.633 "num_blocks": 262144, 00:15:26.633 "uuid": "48eb3810-ef08-4ccd-8d8b-d029d2e5a467", 00:15:26.633 "assigned_rate_limits": { 00:15:26.633 "rw_ios_per_sec": 0, 00:15:26.633 "rw_mbytes_per_sec": 0, 00:15:26.633 "r_mbytes_per_sec": 0, 00:15:26.633 "w_mbytes_per_sec": 0 00:15:26.633 }, 00:15:26.633 "claimed": false, 00:15:26.633 "zoned": false, 00:15:26.633 "supported_io_types": { 00:15:26.633 "read": true, 00:15:26.633 "write": true, 00:15:26.633 "unmap": true, 00:15:26.633 "flush": true, 00:15:26.633 "reset": true, 00:15:26.633 "nvme_admin": false, 00:15:26.633 "nvme_io": false, 00:15:26.633 "nvme_io_md": false, 00:15:26.633 "write_zeroes": true, 00:15:26.633 "zcopy": false, 00:15:26.633 "get_zone_info": false, 00:15:26.633 "zone_management": false, 00:15:26.633 "zone_append": false, 00:15:26.633 "compare": false, 00:15:26.633 "compare_and_write": false, 00:15:26.633 "abort": false, 00:15:26.633 "seek_hole": false, 00:15:26.633 "seek_data": false, 00:15:26.633 "copy": false, 00:15:26.633 "nvme_iov_md": false 00:15:26.633 }, 00:15:26.633 "memory_domains": [ 00:15:26.633 { 00:15:26.633 "dma_device_id": "system", 00:15:26.633 "dma_device_type": 1 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.633 "dma_device_type": 2 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "dma_device_id": "system", 00:15:26.633 "dma_device_type": 1 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.633 "dma_device_type": 2 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "dma_device_id": "system", 00:15:26.633 "dma_device_type": 1 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:26.633 "dma_device_type": 2 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "dma_device_id": "system", 00:15:26.633 "dma_device_type": 1 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.633 "dma_device_type": 2 00:15:26.633 } 00:15:26.633 ], 00:15:26.633 "driver_specific": { 00:15:26.633 "raid": { 00:15:26.633 "uuid": "48eb3810-ef08-4ccd-8d8b-d029d2e5a467", 00:15:26.633 "strip_size_kb": 64, 00:15:26.633 "state": "online", 00:15:26.633 "raid_level": "concat", 00:15:26.633 "superblock": false, 00:15:26.633 "num_base_bdevs": 4, 00:15:26.633 "num_base_bdevs_discovered": 4, 00:15:26.633 "num_base_bdevs_operational": 4, 00:15:26.633 "base_bdevs_list": [ 00:15:26.633 { 00:15:26.633 "name": "BaseBdev1", 00:15:26.633 "uuid": "5be1f7d3-ffd8-4ab4-99f8-cfb02df3ef08", 00:15:26.633 "is_configured": true, 00:15:26.633 "data_offset": 0, 00:15:26.633 "data_size": 65536 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "name": "BaseBdev2", 00:15:26.633 "uuid": "d50823e4-f2eb-4773-827d-e844bf94d3b1", 00:15:26.633 "is_configured": true, 00:15:26.633 "data_offset": 0, 00:15:26.633 "data_size": 65536 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "name": "BaseBdev3", 00:15:26.633 "uuid": "ef2c416d-2b7d-401a-b8ff-e1984591da91", 00:15:26.633 "is_configured": true, 00:15:26.633 "data_offset": 0, 00:15:26.633 "data_size": 65536 00:15:26.633 }, 00:15:26.633 { 00:15:26.633 "name": "BaseBdev4", 00:15:26.633 "uuid": "6a399698-cdc0-47aa-a275-7788fcb4452a", 00:15:26.633 "is_configured": true, 00:15:26.633 "data_offset": 0, 00:15:26.633 "data_size": 65536 00:15:26.633 } 00:15:26.633 ] 00:15:26.633 } 00:15:26.633 } 00:15:26.633 }' 00:15:26.634 10:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:26.634 BaseBdev2 
00:15:26.634 BaseBdev3 00:15:26.634 BaseBdev4' 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.634 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.916 10:43:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.916 10:43:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.916 [2024-10-30 10:43:48.283868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.916 [2024-10-30 10:43:48.283912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.916 [2024-10-30 10:43:48.283991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.916 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.174 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.174 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.174 "name": "Existed_Raid", 00:15:27.174 "uuid": "48eb3810-ef08-4ccd-8d8b-d029d2e5a467", 00:15:27.174 "strip_size_kb": 64, 00:15:27.174 "state": "offline", 00:15:27.174 "raid_level": "concat", 00:15:27.174 "superblock": false, 00:15:27.174 "num_base_bdevs": 4, 00:15:27.174 "num_base_bdevs_discovered": 3, 00:15:27.174 "num_base_bdevs_operational": 3, 00:15:27.174 "base_bdevs_list": [ 00:15:27.174 { 00:15:27.174 "name": null, 00:15:27.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.174 "is_configured": false, 00:15:27.174 "data_offset": 0, 00:15:27.174 "data_size": 65536 00:15:27.174 }, 00:15:27.174 { 00:15:27.174 "name": "BaseBdev2", 00:15:27.174 "uuid": "d50823e4-f2eb-4773-827d-e844bf94d3b1", 00:15:27.174 "is_configured": 
true, 00:15:27.174 "data_offset": 0, 00:15:27.174 "data_size": 65536 00:15:27.174 }, 00:15:27.174 { 00:15:27.174 "name": "BaseBdev3", 00:15:27.174 "uuid": "ef2c416d-2b7d-401a-b8ff-e1984591da91", 00:15:27.174 "is_configured": true, 00:15:27.174 "data_offset": 0, 00:15:27.174 "data_size": 65536 00:15:27.174 }, 00:15:27.174 { 00:15:27.174 "name": "BaseBdev4", 00:15:27.174 "uuid": "6a399698-cdc0-47aa-a275-7788fcb4452a", 00:15:27.174 "is_configured": true, 00:15:27.174 "data_offset": 0, 00:15:27.174 "data_size": 65536 00:15:27.174 } 00:15:27.174 ] 00:15:27.174 }' 00:15:27.174 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.174 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.433 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:27.433 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.433 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.433 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.433 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.433 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.692 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.692 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.692 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.692 10:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:27.692 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:27.692 10:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.692 [2024-10-30 10:43:48.936687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.692 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.692 [2024-10-30 10:43:49.083862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.952 10:43:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.952 [2024-10-30 10:43:49.236427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:27.952 [2024-10-30 10:43:49.236509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.952 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.212 BaseBdev2 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.212 [ 00:15:28.212 { 00:15:28.212 "name": "BaseBdev2", 00:15:28.212 "aliases": [ 00:15:28.212 "efa6e759-6c6a-4e3e-91dd-7635ea164971" 00:15:28.212 ], 00:15:28.212 "product_name": "Malloc disk", 00:15:28.212 "block_size": 512, 00:15:28.212 "num_blocks": 65536, 00:15:28.212 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:28.212 "assigned_rate_limits": { 00:15:28.212 "rw_ios_per_sec": 0, 00:15:28.212 "rw_mbytes_per_sec": 0, 00:15:28.212 "r_mbytes_per_sec": 0, 00:15:28.212 "w_mbytes_per_sec": 0 00:15:28.212 }, 00:15:28.212 "claimed": false, 00:15:28.212 "zoned": false, 00:15:28.212 "supported_io_types": { 00:15:28.212 "read": true, 00:15:28.212 "write": true, 00:15:28.212 "unmap": true, 00:15:28.212 "flush": true, 00:15:28.212 "reset": true, 00:15:28.212 "nvme_admin": false, 00:15:28.212 "nvme_io": false, 00:15:28.212 "nvme_io_md": false, 00:15:28.212 "write_zeroes": true, 00:15:28.212 "zcopy": true, 00:15:28.212 "get_zone_info": false, 00:15:28.212 "zone_management": false, 00:15:28.212 "zone_append": false, 00:15:28.212 "compare": false, 00:15:28.212 "compare_and_write": false, 00:15:28.212 "abort": true, 00:15:28.212 "seek_hole": false, 00:15:28.212 
"seek_data": false, 00:15:28.212 "copy": true, 00:15:28.212 "nvme_iov_md": false 00:15:28.212 }, 00:15:28.212 "memory_domains": [ 00:15:28.212 { 00:15:28.212 "dma_device_id": "system", 00:15:28.212 "dma_device_type": 1 00:15:28.212 }, 00:15:28.212 { 00:15:28.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.212 "dma_device_type": 2 00:15:28.212 } 00:15:28.212 ], 00:15:28.212 "driver_specific": {} 00:15:28.212 } 00:15:28.212 ] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.212 BaseBdev3 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.212 [ 00:15:28.212 { 00:15:28.212 "name": "BaseBdev3", 00:15:28.212 "aliases": [ 00:15:28.212 "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7" 00:15:28.212 ], 00:15:28.212 "product_name": "Malloc disk", 00:15:28.212 "block_size": 512, 00:15:28.212 "num_blocks": 65536, 00:15:28.212 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:28.212 "assigned_rate_limits": { 00:15:28.212 "rw_ios_per_sec": 0, 00:15:28.212 "rw_mbytes_per_sec": 0, 00:15:28.212 "r_mbytes_per_sec": 0, 00:15:28.212 "w_mbytes_per_sec": 0 00:15:28.212 }, 00:15:28.212 "claimed": false, 00:15:28.212 "zoned": false, 00:15:28.212 "supported_io_types": { 00:15:28.212 "read": true, 00:15:28.212 "write": true, 00:15:28.212 "unmap": true, 00:15:28.212 "flush": true, 00:15:28.212 "reset": true, 00:15:28.212 "nvme_admin": false, 00:15:28.212 "nvme_io": false, 00:15:28.212 "nvme_io_md": false, 00:15:28.212 "write_zeroes": true, 00:15:28.212 "zcopy": true, 00:15:28.212 "get_zone_info": false, 00:15:28.212 "zone_management": false, 00:15:28.212 "zone_append": false, 00:15:28.212 "compare": false, 00:15:28.212 "compare_and_write": false, 00:15:28.212 "abort": true, 00:15:28.212 "seek_hole": false, 00:15:28.212 "seek_data": false, 
00:15:28.212 "copy": true, 00:15:28.212 "nvme_iov_md": false 00:15:28.212 }, 00:15:28.212 "memory_domains": [ 00:15:28.212 { 00:15:28.212 "dma_device_id": "system", 00:15:28.212 "dma_device_type": 1 00:15:28.212 }, 00:15:28.212 { 00:15:28.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.212 "dma_device_type": 2 00:15:28.212 } 00:15:28.212 ], 00:15:28.212 "driver_specific": {} 00:15:28.212 } 00:15:28.212 ] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:28.212 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.213 BaseBdev4 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:28.213 
10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.213 [ 00:15:28.213 { 00:15:28.213 "name": "BaseBdev4", 00:15:28.213 "aliases": [ 00:15:28.213 "45541340-6190-4691-ae70-6e5d34bc7a74" 00:15:28.213 ], 00:15:28.213 "product_name": "Malloc disk", 00:15:28.213 "block_size": 512, 00:15:28.213 "num_blocks": 65536, 00:15:28.213 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:28.213 "assigned_rate_limits": { 00:15:28.213 "rw_ios_per_sec": 0, 00:15:28.213 "rw_mbytes_per_sec": 0, 00:15:28.213 "r_mbytes_per_sec": 0, 00:15:28.213 "w_mbytes_per_sec": 0 00:15:28.213 }, 00:15:28.213 "claimed": false, 00:15:28.213 "zoned": false, 00:15:28.213 "supported_io_types": { 00:15:28.213 "read": true, 00:15:28.213 "write": true, 00:15:28.213 "unmap": true, 00:15:28.213 "flush": true, 00:15:28.213 "reset": true, 00:15:28.213 "nvme_admin": false, 00:15:28.213 "nvme_io": false, 00:15:28.213 "nvme_io_md": false, 00:15:28.213 "write_zeroes": true, 00:15:28.213 "zcopy": true, 00:15:28.213 "get_zone_info": false, 00:15:28.213 "zone_management": false, 00:15:28.213 "zone_append": false, 00:15:28.213 "compare": false, 00:15:28.213 "compare_and_write": false, 00:15:28.213 "abort": true, 00:15:28.213 "seek_hole": false, 00:15:28.213 "seek_data": false, 00:15:28.213 
"copy": true, 00:15:28.213 "nvme_iov_md": false 00:15:28.213 }, 00:15:28.213 "memory_domains": [ 00:15:28.213 { 00:15:28.213 "dma_device_id": "system", 00:15:28.213 "dma_device_type": 1 00:15:28.213 }, 00:15:28.213 { 00:15:28.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.213 "dma_device_type": 2 00:15:28.213 } 00:15:28.213 ], 00:15:28.213 "driver_specific": {} 00:15:28.213 } 00:15:28.213 ] 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.213 [2024-10-30 10:43:49.616839] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.213 [2024-10-30 10:43:49.616896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.213 [2024-10-30 10:43:49.616927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.213 [2024-10-30 10:43:49.619687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.213 [2024-10-30 10:43:49.619770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.213 10:43:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.213 "name": "Existed_Raid", 00:15:28.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.213 "strip_size_kb": 64, 00:15:28.213 "state": "configuring", 00:15:28.213 
"raid_level": "concat", 00:15:28.213 "superblock": false, 00:15:28.213 "num_base_bdevs": 4, 00:15:28.213 "num_base_bdevs_discovered": 3, 00:15:28.213 "num_base_bdevs_operational": 4, 00:15:28.213 "base_bdevs_list": [ 00:15:28.213 { 00:15:28.213 "name": "BaseBdev1", 00:15:28.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.213 "is_configured": false, 00:15:28.213 "data_offset": 0, 00:15:28.213 "data_size": 0 00:15:28.213 }, 00:15:28.213 { 00:15:28.213 "name": "BaseBdev2", 00:15:28.213 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:28.213 "is_configured": true, 00:15:28.213 "data_offset": 0, 00:15:28.213 "data_size": 65536 00:15:28.213 }, 00:15:28.213 { 00:15:28.213 "name": "BaseBdev3", 00:15:28.213 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:28.213 "is_configured": true, 00:15:28.213 "data_offset": 0, 00:15:28.213 "data_size": 65536 00:15:28.213 }, 00:15:28.213 { 00:15:28.213 "name": "BaseBdev4", 00:15:28.213 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:28.213 "is_configured": true, 00:15:28.213 "data_offset": 0, 00:15:28.213 "data_size": 65536 00:15:28.213 } 00:15:28.213 ] 00:15:28.213 }' 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.213 10:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.779 [2024-10-30 10:43:50.177042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.779 "name": "Existed_Raid", 00:15:28.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.779 "strip_size_kb": 64, 00:15:28.779 "state": "configuring", 00:15:28.779 "raid_level": "concat", 00:15:28.779 "superblock": false, 
00:15:28.779 "num_base_bdevs": 4, 00:15:28.779 "num_base_bdevs_discovered": 2, 00:15:28.779 "num_base_bdevs_operational": 4, 00:15:28.779 "base_bdevs_list": [ 00:15:28.779 { 00:15:28.779 "name": "BaseBdev1", 00:15:28.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.779 "is_configured": false, 00:15:28.779 "data_offset": 0, 00:15:28.779 "data_size": 0 00:15:28.779 }, 00:15:28.779 { 00:15:28.779 "name": null, 00:15:28.779 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:28.779 "is_configured": false, 00:15:28.779 "data_offset": 0, 00:15:28.779 "data_size": 65536 00:15:28.779 }, 00:15:28.779 { 00:15:28.779 "name": "BaseBdev3", 00:15:28.779 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:28.779 "is_configured": true, 00:15:28.779 "data_offset": 0, 00:15:28.779 "data_size": 65536 00:15:28.779 }, 00:15:28.779 { 00:15:28.779 "name": "BaseBdev4", 00:15:28.779 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:28.779 "is_configured": true, 00:15:28.779 "data_offset": 0, 00:15:28.779 "data_size": 65536 00:15:28.779 } 00:15:28.779 ] 00:15:28.779 }' 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.779 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.345 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.345 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.345 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.345 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.345 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.345 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:29.345 10:43:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:29.345 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.345 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.604 [2024-10-30 10:43:50.819164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.604 BaseBdev1 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.604 10:43:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.604 [ 00:15:29.604 { 00:15:29.604 "name": "BaseBdev1", 00:15:29.604 "aliases": [ 00:15:29.604 "a851b253-2cfc-4641-ac34-32a2fafdaf05" 00:15:29.604 ], 00:15:29.604 "product_name": "Malloc disk", 00:15:29.604 "block_size": 512, 00:15:29.604 "num_blocks": 65536, 00:15:29.604 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:29.604 "assigned_rate_limits": { 00:15:29.604 "rw_ios_per_sec": 0, 00:15:29.604 "rw_mbytes_per_sec": 0, 00:15:29.604 "r_mbytes_per_sec": 0, 00:15:29.604 "w_mbytes_per_sec": 0 00:15:29.604 }, 00:15:29.604 "claimed": true, 00:15:29.604 "claim_type": "exclusive_write", 00:15:29.604 "zoned": false, 00:15:29.604 "supported_io_types": { 00:15:29.604 "read": true, 00:15:29.604 "write": true, 00:15:29.604 "unmap": true, 00:15:29.604 "flush": true, 00:15:29.604 "reset": true, 00:15:29.604 "nvme_admin": false, 00:15:29.604 "nvme_io": false, 00:15:29.604 "nvme_io_md": false, 00:15:29.604 "write_zeroes": true, 00:15:29.604 "zcopy": true, 00:15:29.604 "get_zone_info": false, 00:15:29.604 "zone_management": false, 00:15:29.604 "zone_append": false, 00:15:29.604 "compare": false, 00:15:29.604 "compare_and_write": false, 00:15:29.604 "abort": true, 00:15:29.604 "seek_hole": false, 00:15:29.604 "seek_data": false, 00:15:29.604 "copy": true, 00:15:29.604 "nvme_iov_md": false 00:15:29.604 }, 00:15:29.604 "memory_domains": [ 00:15:29.604 { 00:15:29.604 "dma_device_id": "system", 00:15:29.604 "dma_device_type": 1 00:15:29.604 }, 00:15:29.604 { 00:15:29.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.605 "dma_device_type": 2 00:15:29.605 } 00:15:29.605 ], 00:15:29.605 "driver_specific": {} 00:15:29.605 } 00:15:29.605 ] 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.605 "name": "Existed_Raid", 00:15:29.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.605 "strip_size_kb": 64, 00:15:29.605 "state": "configuring", 00:15:29.605 "raid_level": "concat", 00:15:29.605 "superblock": false, 
00:15:29.605 "num_base_bdevs": 4, 00:15:29.605 "num_base_bdevs_discovered": 3, 00:15:29.605 "num_base_bdevs_operational": 4, 00:15:29.605 "base_bdevs_list": [ 00:15:29.605 { 00:15:29.605 "name": "BaseBdev1", 00:15:29.605 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:29.605 "is_configured": true, 00:15:29.605 "data_offset": 0, 00:15:29.605 "data_size": 65536 00:15:29.605 }, 00:15:29.605 { 00:15:29.605 "name": null, 00:15:29.605 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:29.605 "is_configured": false, 00:15:29.605 "data_offset": 0, 00:15:29.605 "data_size": 65536 00:15:29.605 }, 00:15:29.605 { 00:15:29.605 "name": "BaseBdev3", 00:15:29.605 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:29.605 "is_configured": true, 00:15:29.605 "data_offset": 0, 00:15:29.605 "data_size": 65536 00:15:29.605 }, 00:15:29.605 { 00:15:29.605 "name": "BaseBdev4", 00:15:29.605 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:29.605 "is_configured": true, 00:15:29.605 "data_offset": 0, 00:15:29.605 "data_size": 65536 00:15:29.605 } 00:15:29.605 ] 00:15:29.605 }' 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.605 10:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:30.173 10:43:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.173 [2024-10-30 10:43:51.407423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.173 10:43:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.173 "name": "Existed_Raid", 00:15:30.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.173 "strip_size_kb": 64, 00:15:30.173 "state": "configuring", 00:15:30.173 "raid_level": "concat", 00:15:30.173 "superblock": false, 00:15:30.173 "num_base_bdevs": 4, 00:15:30.173 "num_base_bdevs_discovered": 2, 00:15:30.173 "num_base_bdevs_operational": 4, 00:15:30.173 "base_bdevs_list": [ 00:15:30.173 { 00:15:30.173 "name": "BaseBdev1", 00:15:30.173 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:30.173 "is_configured": true, 00:15:30.173 "data_offset": 0, 00:15:30.173 "data_size": 65536 00:15:30.173 }, 00:15:30.173 { 00:15:30.173 "name": null, 00:15:30.173 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:30.173 "is_configured": false, 00:15:30.173 "data_offset": 0, 00:15:30.173 "data_size": 65536 00:15:30.173 }, 00:15:30.173 { 00:15:30.173 "name": null, 00:15:30.173 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:30.173 "is_configured": false, 00:15:30.173 "data_offset": 0, 00:15:30.173 "data_size": 65536 00:15:30.173 }, 00:15:30.173 { 00:15:30.173 "name": "BaseBdev4", 00:15:30.173 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:30.173 "is_configured": true, 00:15:30.173 "data_offset": 0, 00:15:30.173 "data_size": 65536 00:15:30.173 } 00:15:30.173 ] 00:15:30.173 }' 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.173 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.742 [2024-10-30 10:43:51.991590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.742 10:43:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.742 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.742 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.742 "name": "Existed_Raid", 00:15:30.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.742 "strip_size_kb": 64, 00:15:30.742 "state": "configuring", 00:15:30.742 "raid_level": "concat", 00:15:30.742 "superblock": false, 00:15:30.742 "num_base_bdevs": 4, 00:15:30.742 "num_base_bdevs_discovered": 3, 00:15:30.742 "num_base_bdevs_operational": 4, 00:15:30.742 "base_bdevs_list": [ 00:15:30.742 { 00:15:30.742 "name": "BaseBdev1", 00:15:30.742 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:30.742 "is_configured": true, 00:15:30.742 "data_offset": 0, 00:15:30.742 "data_size": 65536 00:15:30.742 }, 00:15:30.742 { 00:15:30.742 "name": null, 00:15:30.742 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:30.742 "is_configured": false, 00:15:30.742 "data_offset": 0, 00:15:30.742 "data_size": 65536 00:15:30.742 }, 00:15:30.742 { 00:15:30.742 "name": "BaseBdev3", 00:15:30.742 "uuid": 
"7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:30.742 "is_configured": true, 00:15:30.742 "data_offset": 0, 00:15:30.742 "data_size": 65536 00:15:30.742 }, 00:15:30.742 { 00:15:30.742 "name": "BaseBdev4", 00:15:30.742 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:30.742 "is_configured": true, 00:15:30.742 "data_offset": 0, 00:15:30.742 "data_size": 65536 00:15:30.742 } 00:15:30.742 ] 00:15:30.742 }' 00:15:30.742 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.742 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.401 [2024-10-30 10:43:52.555755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.401 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.401 "name": "Existed_Raid", 00:15:31.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.401 "strip_size_kb": 64, 00:15:31.401 "state": "configuring", 00:15:31.401 "raid_level": "concat", 00:15:31.402 "superblock": false, 00:15:31.402 "num_base_bdevs": 4, 00:15:31.402 
"num_base_bdevs_discovered": 2, 00:15:31.402 "num_base_bdevs_operational": 4, 00:15:31.402 "base_bdevs_list": [ 00:15:31.402 { 00:15:31.402 "name": null, 00:15:31.402 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:31.402 "is_configured": false, 00:15:31.402 "data_offset": 0, 00:15:31.402 "data_size": 65536 00:15:31.402 }, 00:15:31.402 { 00:15:31.402 "name": null, 00:15:31.402 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:31.402 "is_configured": false, 00:15:31.402 "data_offset": 0, 00:15:31.402 "data_size": 65536 00:15:31.402 }, 00:15:31.402 { 00:15:31.402 "name": "BaseBdev3", 00:15:31.402 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:31.402 "is_configured": true, 00:15:31.402 "data_offset": 0, 00:15:31.402 "data_size": 65536 00:15:31.402 }, 00:15:31.402 { 00:15:31.402 "name": "BaseBdev4", 00:15:31.402 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:31.402 "is_configured": true, 00:15:31.402 "data_offset": 0, 00:15:31.402 "data_size": 65536 00:15:31.402 } 00:15:31.402 ] 00:15:31.402 }' 00:15:31.402 10:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.402 10:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.018 [2024-10-30 10:43:53.234838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.018 "name": "Existed_Raid", 00:15:32.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.018 "strip_size_kb": 64, 00:15:32.018 "state": "configuring", 00:15:32.018 "raid_level": "concat", 00:15:32.018 "superblock": false, 00:15:32.018 "num_base_bdevs": 4, 00:15:32.018 "num_base_bdevs_discovered": 3, 00:15:32.018 "num_base_bdevs_operational": 4, 00:15:32.018 "base_bdevs_list": [ 00:15:32.018 { 00:15:32.018 "name": null, 00:15:32.018 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:32.018 "is_configured": false, 00:15:32.018 "data_offset": 0, 00:15:32.018 "data_size": 65536 00:15:32.018 }, 00:15:32.018 { 00:15:32.018 "name": "BaseBdev2", 00:15:32.018 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:32.018 "is_configured": true, 00:15:32.018 "data_offset": 0, 00:15:32.018 "data_size": 65536 00:15:32.018 }, 00:15:32.018 { 00:15:32.018 "name": "BaseBdev3", 00:15:32.018 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:32.018 "is_configured": true, 00:15:32.018 "data_offset": 0, 00:15:32.018 "data_size": 65536 00:15:32.018 }, 00:15:32.018 { 00:15:32.018 "name": "BaseBdev4", 00:15:32.018 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:32.018 "is_configured": true, 00:15:32.018 "data_offset": 0, 00:15:32.018 "data_size": 65536 00:15:32.018 } 00:15:32.018 ] 00:15:32.018 }' 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.018 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a851b253-2cfc-4641-ac34-32a2fafdaf05 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.586 [2024-10-30 10:43:53.924345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:32.586 [2024-10-30 10:43:53.924410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:32.586 [2024-10-30 10:43:53.924422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:32.586 [2024-10-30 10:43:53.924743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:32.586 [2024-10-30 10:43:53.924923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:32.586 [2024-10-30 10:43:53.924945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:32.586 [2024-10-30 10:43:53.925260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.586 NewBaseBdev 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.586 [ 00:15:32.586 { 00:15:32.586 "name": "NewBaseBdev", 00:15:32.586 "aliases": [ 00:15:32.586 "a851b253-2cfc-4641-ac34-32a2fafdaf05" 00:15:32.586 ], 00:15:32.586 "product_name": "Malloc disk", 00:15:32.586 "block_size": 512, 00:15:32.586 "num_blocks": 65536, 00:15:32.586 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:32.586 "assigned_rate_limits": { 00:15:32.586 "rw_ios_per_sec": 0, 00:15:32.586 "rw_mbytes_per_sec": 0, 00:15:32.586 "r_mbytes_per_sec": 0, 00:15:32.586 "w_mbytes_per_sec": 0 00:15:32.586 }, 00:15:32.586 "claimed": true, 00:15:32.586 "claim_type": "exclusive_write", 00:15:32.586 "zoned": false, 00:15:32.586 "supported_io_types": { 00:15:32.586 "read": true, 00:15:32.586 "write": true, 00:15:32.586 "unmap": true, 00:15:32.586 "flush": true, 00:15:32.586 "reset": true, 00:15:32.586 "nvme_admin": false, 00:15:32.586 "nvme_io": false, 00:15:32.586 "nvme_io_md": false, 00:15:32.586 "write_zeroes": true, 00:15:32.586 "zcopy": true, 00:15:32.586 "get_zone_info": false, 00:15:32.586 "zone_management": false, 00:15:32.586 "zone_append": false, 00:15:32.586 "compare": false, 00:15:32.586 "compare_and_write": false, 00:15:32.586 "abort": true, 00:15:32.586 "seek_hole": false, 00:15:32.586 "seek_data": false, 00:15:32.586 "copy": true, 00:15:32.586 "nvme_iov_md": false 00:15:32.586 }, 00:15:32.586 "memory_domains": [ 00:15:32.586 { 00:15:32.586 "dma_device_id": "system", 00:15:32.586 "dma_device_type": 1 00:15:32.586 }, 00:15:32.586 { 00:15:32.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.586 "dma_device_type": 2 00:15:32.586 } 00:15:32.586 ], 00:15:32.586 "driver_specific": {} 00:15:32.586 } 00:15:32.586 ] 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.586 10:43:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.586 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.586 "name": "Existed_Raid", 00:15:32.587 "uuid": "92a7e582-3a78-433f-a3e4-c3506d7b6599", 00:15:32.587 "strip_size_kb": 64, 00:15:32.587 "state": "online", 00:15:32.587 "raid_level": "concat", 00:15:32.587 "superblock": false, 00:15:32.587 
"num_base_bdevs": 4, 00:15:32.587 "num_base_bdevs_discovered": 4, 00:15:32.587 "num_base_bdevs_operational": 4, 00:15:32.587 "base_bdevs_list": [ 00:15:32.587 { 00:15:32.587 "name": "NewBaseBdev", 00:15:32.587 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:32.587 "is_configured": true, 00:15:32.587 "data_offset": 0, 00:15:32.587 "data_size": 65536 00:15:32.587 }, 00:15:32.587 { 00:15:32.587 "name": "BaseBdev2", 00:15:32.587 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:32.587 "is_configured": true, 00:15:32.587 "data_offset": 0, 00:15:32.587 "data_size": 65536 00:15:32.587 }, 00:15:32.587 { 00:15:32.587 "name": "BaseBdev3", 00:15:32.587 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:32.587 "is_configured": true, 00:15:32.587 "data_offset": 0, 00:15:32.587 "data_size": 65536 00:15:32.587 }, 00:15:32.587 { 00:15:32.587 "name": "BaseBdev4", 00:15:32.587 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:32.587 "is_configured": true, 00:15:32.587 "data_offset": 0, 00:15:32.587 "data_size": 65536 00:15:32.587 } 00:15:32.587 ] 00:15:32.587 }' 00:15:32.587 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.587 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.151 10:43:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.151 [2024-10-30 10:43:54.464966] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.151 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.151 "name": "Existed_Raid", 00:15:33.151 "aliases": [ 00:15:33.151 "92a7e582-3a78-433f-a3e4-c3506d7b6599" 00:15:33.151 ], 00:15:33.151 "product_name": "Raid Volume", 00:15:33.151 "block_size": 512, 00:15:33.151 "num_blocks": 262144, 00:15:33.151 "uuid": "92a7e582-3a78-433f-a3e4-c3506d7b6599", 00:15:33.151 "assigned_rate_limits": { 00:15:33.151 "rw_ios_per_sec": 0, 00:15:33.151 "rw_mbytes_per_sec": 0, 00:15:33.151 "r_mbytes_per_sec": 0, 00:15:33.151 "w_mbytes_per_sec": 0 00:15:33.151 }, 00:15:33.151 "claimed": false, 00:15:33.151 "zoned": false, 00:15:33.151 "supported_io_types": { 00:15:33.151 "read": true, 00:15:33.151 "write": true, 00:15:33.151 "unmap": true, 00:15:33.151 "flush": true, 00:15:33.151 "reset": true, 00:15:33.151 "nvme_admin": false, 00:15:33.151 "nvme_io": false, 00:15:33.151 "nvme_io_md": false, 00:15:33.151 "write_zeroes": true, 00:15:33.151 "zcopy": false, 00:15:33.151 "get_zone_info": false, 00:15:33.151 "zone_management": false, 00:15:33.151 "zone_append": false, 00:15:33.151 "compare": false, 00:15:33.151 "compare_and_write": false, 00:15:33.151 "abort": false, 00:15:33.151 "seek_hole": false, 00:15:33.151 "seek_data": false, 00:15:33.151 "copy": false, 00:15:33.151 "nvme_iov_md": false 00:15:33.151 }, 
00:15:33.151 "memory_domains": [ 00:15:33.151 { 00:15:33.151 "dma_device_id": "system", 00:15:33.151 "dma_device_type": 1 00:15:33.151 }, 00:15:33.152 { 00:15:33.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.152 "dma_device_type": 2 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "dma_device_id": "system", 00:15:33.152 "dma_device_type": 1 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.152 "dma_device_type": 2 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "dma_device_id": "system", 00:15:33.152 "dma_device_type": 1 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.152 "dma_device_type": 2 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "dma_device_id": "system", 00:15:33.152 "dma_device_type": 1 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.152 "dma_device_type": 2 00:15:33.152 } 00:15:33.152 ], 00:15:33.152 "driver_specific": { 00:15:33.152 "raid": { 00:15:33.152 "uuid": "92a7e582-3a78-433f-a3e4-c3506d7b6599", 00:15:33.152 "strip_size_kb": 64, 00:15:33.152 "state": "online", 00:15:33.152 "raid_level": "concat", 00:15:33.152 "superblock": false, 00:15:33.152 "num_base_bdevs": 4, 00:15:33.152 "num_base_bdevs_discovered": 4, 00:15:33.152 "num_base_bdevs_operational": 4, 00:15:33.152 "base_bdevs_list": [ 00:15:33.152 { 00:15:33.152 "name": "NewBaseBdev", 00:15:33.152 "uuid": "a851b253-2cfc-4641-ac34-32a2fafdaf05", 00:15:33.152 "is_configured": true, 00:15:33.152 "data_offset": 0, 00:15:33.152 "data_size": 65536 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "name": "BaseBdev2", 00:15:33.152 "uuid": "efa6e759-6c6a-4e3e-91dd-7635ea164971", 00:15:33.152 "is_configured": true, 00:15:33.152 "data_offset": 0, 00:15:33.152 "data_size": 65536 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "name": "BaseBdev3", 00:15:33.152 "uuid": "7c5dfd30-6468-4b0c-90c7-01afedeb8bd7", 00:15:33.152 "is_configured": true, 00:15:33.152 "data_offset": 0, 
00:15:33.152 "data_size": 65536 00:15:33.152 }, 00:15:33.152 { 00:15:33.152 "name": "BaseBdev4", 00:15:33.152 "uuid": "45541340-6190-4691-ae70-6e5d34bc7a74", 00:15:33.152 "is_configured": true, 00:15:33.152 "data_offset": 0, 00:15:33.152 "data_size": 65536 00:15:33.152 } 00:15:33.152 ] 00:15:33.152 } 00:15:33.152 } 00:15:33.152 }' 00:15:33.152 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.152 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:33.152 BaseBdev2 00:15:33.152 BaseBdev3 00:15:33.152 BaseBdev4' 00:15:33.152 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.410 [2024-10-30 10:43:54.856629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.410 [2024-10-30 10:43:54.856665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.410 [2024-10-30 10:43:54.856756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.410 [2024-10-30 10:43:54.856842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.410 [2024-10-30 10:43:54.856858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71563 00:15:33.410 10:43:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71563 ']' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71563 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:33.410 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71563 00:15:33.669 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:33.669 killing process with pid 71563 00:15:33.669 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:33.669 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71563' 00:15:33.669 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71563 00:15:33.669 [2024-10-30 10:43:54.896011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.669 10:43:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71563 00:15:33.928 [2024-10-30 10:43:55.244542] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:34.862 00:15:34.862 real 0m12.923s 00:15:34.862 user 0m21.524s 00:15:34.862 sys 0m1.792s 00:15:34.862 ************************************ 00:15:34.862 END TEST raid_state_function_test 00:15:34.862 ************************************ 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.862 10:43:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:15:34.862 10:43:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:34.862 10:43:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:34.862 10:43:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:34.862 ************************************ 00:15:34.862 START TEST raid_state_function_test_sb 00:15:34.862 ************************************ 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:34.862 Process raid pid: 72251 00:15:34.862 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72251 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72251' 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72251 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72251 ']' 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:34.862 10:43:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.120 [2024-10-30 10:43:56.430563] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:15:35.120 [2024-10-30 10:43:56.431041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.377 [2024-10-30 10:43:56.617314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.377 [2024-10-30 10:43:56.744836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.636 [2024-10-30 10:43:56.948638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.636 [2024-10-30 10:43:56.948954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.024 [2024-10-30 10:43:57.399497] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.024 [2024-10-30 10:43:57.399564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.024 [2024-10-30 10:43:57.399582] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.024 [2024-10-30 10:43:57.399599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.024 [2024-10-30 10:43:57.399609] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:36.024 [2024-10-30 10:43:57.399624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.024 [2024-10-30 10:43:57.399634] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:36.024 [2024-10-30 10:43:57.399648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.024 10:43:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.024 "name": "Existed_Raid", 00:15:36.024 "uuid": "0e13061f-6dd2-4193-84a0-dbb227bd5c2d", 00:15:36.024 "strip_size_kb": 64, 00:15:36.024 "state": "configuring", 00:15:36.024 "raid_level": "concat", 00:15:36.024 "superblock": true, 00:15:36.024 "num_base_bdevs": 4, 00:15:36.024 "num_base_bdevs_discovered": 0, 00:15:36.024 "num_base_bdevs_operational": 4, 00:15:36.024 "base_bdevs_list": [ 00:15:36.024 { 00:15:36.024 "name": "BaseBdev1", 00:15:36.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.024 "is_configured": false, 00:15:36.024 "data_offset": 0, 00:15:36.024 "data_size": 0 00:15:36.024 }, 00:15:36.024 { 00:15:36.024 "name": "BaseBdev2", 00:15:36.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.024 "is_configured": false, 00:15:36.024 "data_offset": 0, 00:15:36.024 "data_size": 0 00:15:36.024 }, 00:15:36.024 { 00:15:36.024 "name": "BaseBdev3", 00:15:36.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.024 "is_configured": false, 00:15:36.024 "data_offset": 0, 00:15:36.024 "data_size": 0 00:15:36.024 }, 00:15:36.024 { 00:15:36.024 "name": "BaseBdev4", 00:15:36.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.024 "is_configured": false, 00:15:36.024 "data_offset": 0, 00:15:36.024 "data_size": 0 00:15:36.024 } 00:15:36.024 ] 00:15:36.024 }' 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.024 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.591 10:43:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:36.591 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.591 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.591 [2024-10-30 10:43:57.943578] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:36.591 [2024-10-30 10:43:57.943624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:36.591 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.592 [2024-10-30 10:43:57.951555] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.592 [2024-10-30 10:43:57.951765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.592 [2024-10-30 10:43:57.951791] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.592 [2024-10-30 10:43:57.951808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.592 [2024-10-30 10:43:57.951818] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:36.592 [2024-10-30 10:43:57.951833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.592 [2024-10-30 10:43:57.951842] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:15:36.592 [2024-10-30 10:43:57.951856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.592 [2024-10-30 10:43:57.995965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.592 BaseBdev1 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.592 10:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.592 [ 00:15:36.592 { 00:15:36.592 "name": "BaseBdev1", 00:15:36.592 "aliases": [ 00:15:36.592 "22c1075b-17d2-4841-bac4-4130ea465532" 00:15:36.592 ], 00:15:36.592 "product_name": "Malloc disk", 00:15:36.592 "block_size": 512, 00:15:36.592 "num_blocks": 65536, 00:15:36.592 "uuid": "22c1075b-17d2-4841-bac4-4130ea465532", 00:15:36.592 "assigned_rate_limits": { 00:15:36.592 "rw_ios_per_sec": 0, 00:15:36.592 "rw_mbytes_per_sec": 0, 00:15:36.592 "r_mbytes_per_sec": 0, 00:15:36.592 "w_mbytes_per_sec": 0 00:15:36.592 }, 00:15:36.592 "claimed": true, 00:15:36.592 "claim_type": "exclusive_write", 00:15:36.592 "zoned": false, 00:15:36.592 "supported_io_types": { 00:15:36.592 "read": true, 00:15:36.592 "write": true, 00:15:36.592 "unmap": true, 00:15:36.592 "flush": true, 00:15:36.592 "reset": true, 00:15:36.592 "nvme_admin": false, 00:15:36.592 "nvme_io": false, 00:15:36.592 "nvme_io_md": false, 00:15:36.592 "write_zeroes": true, 00:15:36.592 "zcopy": true, 00:15:36.592 "get_zone_info": false, 00:15:36.592 "zone_management": false, 00:15:36.592 "zone_append": false, 00:15:36.592 "compare": false, 00:15:36.592 "compare_and_write": false, 00:15:36.592 "abort": true, 00:15:36.592 "seek_hole": false, 00:15:36.592 "seek_data": false, 00:15:36.592 "copy": true, 00:15:36.592 "nvme_iov_md": false 00:15:36.592 }, 00:15:36.592 "memory_domains": [ 00:15:36.592 { 00:15:36.592 "dma_device_id": "system", 00:15:36.592 "dma_device_type": 1 00:15:36.592 }, 00:15:36.592 { 00:15:36.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.592 "dma_device_type": 2 00:15:36.592 } 
00:15:36.592 ], 00:15:36.592 "driver_specific": {} 00:15:36.592 } 00:15:36.592 ] 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.592 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.592 10:43:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.851 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.851 "name": "Existed_Raid", 00:15:36.851 "uuid": "9c814d6e-1555-4c6e-9198-5bd6768add0c", 00:15:36.851 "strip_size_kb": 64, 00:15:36.851 "state": "configuring", 00:15:36.851 "raid_level": "concat", 00:15:36.851 "superblock": true, 00:15:36.851 "num_base_bdevs": 4, 00:15:36.851 "num_base_bdevs_discovered": 1, 00:15:36.851 "num_base_bdevs_operational": 4, 00:15:36.851 "base_bdevs_list": [ 00:15:36.851 { 00:15:36.851 "name": "BaseBdev1", 00:15:36.851 "uuid": "22c1075b-17d2-4841-bac4-4130ea465532", 00:15:36.851 "is_configured": true, 00:15:36.851 "data_offset": 2048, 00:15:36.851 "data_size": 63488 00:15:36.851 }, 00:15:36.851 { 00:15:36.851 "name": "BaseBdev2", 00:15:36.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.851 "is_configured": false, 00:15:36.851 "data_offset": 0, 00:15:36.851 "data_size": 0 00:15:36.851 }, 00:15:36.851 { 00:15:36.851 "name": "BaseBdev3", 00:15:36.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.851 "is_configured": false, 00:15:36.851 "data_offset": 0, 00:15:36.851 "data_size": 0 00:15:36.851 }, 00:15:36.851 { 00:15:36.851 "name": "BaseBdev4", 00:15:36.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.851 "is_configured": false, 00:15:36.851 "data_offset": 0, 00:15:36.851 "data_size": 0 00:15:36.851 } 00:15:36.851 ] 00:15:36.851 }' 00:15:36.851 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.851 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.109 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.109 10:43:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.109 [2024-10-30 10:43:58.572187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.109 [2024-10-30 10:43:58.572248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:37.109 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.109 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.109 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.109 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.367 [2024-10-30 10:43:58.580258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.367 [2024-10-30 10:43:58.582714] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.367 [2024-10-30 10:43:58.582779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.367 [2024-10-30 10:43:58.582794] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.367 [2024-10-30 10:43:58.582810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.367 [2024-10-30 10:43:58.582821] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:37.367 [2024-10-30 10:43:58.582849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.367 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:37.367 "name": "Existed_Raid", 00:15:37.367 "uuid": "7d13e92d-ce63-49c2-92a3-0ec25c7da67d", 00:15:37.367 "strip_size_kb": 64, 00:15:37.368 "state": "configuring", 00:15:37.368 "raid_level": "concat", 00:15:37.368 "superblock": true, 00:15:37.368 "num_base_bdevs": 4, 00:15:37.368 "num_base_bdevs_discovered": 1, 00:15:37.368 "num_base_bdevs_operational": 4, 00:15:37.368 "base_bdevs_list": [ 00:15:37.368 { 00:15:37.368 "name": "BaseBdev1", 00:15:37.368 "uuid": "22c1075b-17d2-4841-bac4-4130ea465532", 00:15:37.368 "is_configured": true, 00:15:37.368 "data_offset": 2048, 00:15:37.368 "data_size": 63488 00:15:37.368 }, 00:15:37.368 { 00:15:37.368 "name": "BaseBdev2", 00:15:37.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.368 "is_configured": false, 00:15:37.368 "data_offset": 0, 00:15:37.368 "data_size": 0 00:15:37.368 }, 00:15:37.368 { 00:15:37.368 "name": "BaseBdev3", 00:15:37.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.368 "is_configured": false, 00:15:37.368 "data_offset": 0, 00:15:37.368 "data_size": 0 00:15:37.368 }, 00:15:37.368 { 00:15:37.368 "name": "BaseBdev4", 00:15:37.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.368 "is_configured": false, 00:15:37.368 "data_offset": 0, 00:15:37.368 "data_size": 0 00:15:37.368 } 00:15:37.368 ] 00:15:37.368 }' 00:15:37.368 10:43:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.368 10:43:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.626 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.626 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.626 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.884 [2024-10-30 10:43:59.122601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:37.884 BaseBdev2 00:15:37.884 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.884 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:37.884 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:37.884 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:37.884 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:37.884 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.885 [ 00:15:37.885 { 00:15:37.885 "name": "BaseBdev2", 00:15:37.885 "aliases": [ 00:15:37.885 "cf328334-1e3d-4dd1-b0bf-8f842ff8327a" 00:15:37.885 ], 00:15:37.885 "product_name": "Malloc disk", 00:15:37.885 "block_size": 512, 00:15:37.885 "num_blocks": 65536, 00:15:37.885 "uuid": "cf328334-1e3d-4dd1-b0bf-8f842ff8327a", 
00:15:37.885 "assigned_rate_limits": { 00:15:37.885 "rw_ios_per_sec": 0, 00:15:37.885 "rw_mbytes_per_sec": 0, 00:15:37.885 "r_mbytes_per_sec": 0, 00:15:37.885 "w_mbytes_per_sec": 0 00:15:37.885 }, 00:15:37.885 "claimed": true, 00:15:37.885 "claim_type": "exclusive_write", 00:15:37.885 "zoned": false, 00:15:37.885 "supported_io_types": { 00:15:37.885 "read": true, 00:15:37.885 "write": true, 00:15:37.885 "unmap": true, 00:15:37.885 "flush": true, 00:15:37.885 "reset": true, 00:15:37.885 "nvme_admin": false, 00:15:37.885 "nvme_io": false, 00:15:37.885 "nvme_io_md": false, 00:15:37.885 "write_zeroes": true, 00:15:37.885 "zcopy": true, 00:15:37.885 "get_zone_info": false, 00:15:37.885 "zone_management": false, 00:15:37.885 "zone_append": false, 00:15:37.885 "compare": false, 00:15:37.885 "compare_and_write": false, 00:15:37.885 "abort": true, 00:15:37.885 "seek_hole": false, 00:15:37.885 "seek_data": false, 00:15:37.885 "copy": true, 00:15:37.885 "nvme_iov_md": false 00:15:37.885 }, 00:15:37.885 "memory_domains": [ 00:15:37.885 { 00:15:37.885 "dma_device_id": "system", 00:15:37.885 "dma_device_type": 1 00:15:37.885 }, 00:15:37.885 { 00:15:37.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.885 "dma_device_type": 2 00:15:37.885 } 00:15:37.885 ], 00:15:37.885 "driver_specific": {} 00:15:37.885 } 00:15:37.885 ] 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.885 "name": "Existed_Raid", 00:15:37.885 "uuid": "7d13e92d-ce63-49c2-92a3-0ec25c7da67d", 00:15:37.885 "strip_size_kb": 64, 00:15:37.885 "state": "configuring", 00:15:37.885 "raid_level": "concat", 00:15:37.885 "superblock": true, 00:15:37.885 "num_base_bdevs": 4, 00:15:37.885 "num_base_bdevs_discovered": 2, 00:15:37.885 
"num_base_bdevs_operational": 4, 00:15:37.885 "base_bdevs_list": [ 00:15:37.885 { 00:15:37.885 "name": "BaseBdev1", 00:15:37.885 "uuid": "22c1075b-17d2-4841-bac4-4130ea465532", 00:15:37.885 "is_configured": true, 00:15:37.885 "data_offset": 2048, 00:15:37.885 "data_size": 63488 00:15:37.885 }, 00:15:37.885 { 00:15:37.885 "name": "BaseBdev2", 00:15:37.885 "uuid": "cf328334-1e3d-4dd1-b0bf-8f842ff8327a", 00:15:37.885 "is_configured": true, 00:15:37.885 "data_offset": 2048, 00:15:37.885 "data_size": 63488 00:15:37.885 }, 00:15:37.885 { 00:15:37.885 "name": "BaseBdev3", 00:15:37.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.885 "is_configured": false, 00:15:37.885 "data_offset": 0, 00:15:37.885 "data_size": 0 00:15:37.885 }, 00:15:37.885 { 00:15:37.885 "name": "BaseBdev4", 00:15:37.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.885 "is_configured": false, 00:15:37.885 "data_offset": 0, 00:15:37.885 "data_size": 0 00:15:37.885 } 00:15:37.885 ] 00:15:37.885 }' 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.885 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.452 [2024-10-30 10:43:59.723683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.452 BaseBdev3 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.452 [ 00:15:38.452 { 00:15:38.452 "name": "BaseBdev3", 00:15:38.452 "aliases": [ 00:15:38.452 "6f8cbe6d-0be3-4324-9a9d-300e72c4ebf1" 00:15:38.452 ], 00:15:38.452 "product_name": "Malloc disk", 00:15:38.452 "block_size": 512, 00:15:38.452 "num_blocks": 65536, 00:15:38.452 "uuid": "6f8cbe6d-0be3-4324-9a9d-300e72c4ebf1", 00:15:38.452 "assigned_rate_limits": { 00:15:38.452 "rw_ios_per_sec": 0, 00:15:38.452 "rw_mbytes_per_sec": 0, 00:15:38.452 "r_mbytes_per_sec": 0, 00:15:38.452 "w_mbytes_per_sec": 0 00:15:38.452 }, 00:15:38.452 "claimed": true, 00:15:38.452 "claim_type": "exclusive_write", 00:15:38.452 "zoned": false, 00:15:38.452 "supported_io_types": { 
00:15:38.452 "read": true, 00:15:38.452 "write": true, 00:15:38.452 "unmap": true, 00:15:38.452 "flush": true, 00:15:38.452 "reset": true, 00:15:38.452 "nvme_admin": false, 00:15:38.452 "nvme_io": false, 00:15:38.452 "nvme_io_md": false, 00:15:38.452 "write_zeroes": true, 00:15:38.452 "zcopy": true, 00:15:38.452 "get_zone_info": false, 00:15:38.452 "zone_management": false, 00:15:38.452 "zone_append": false, 00:15:38.452 "compare": false, 00:15:38.452 "compare_and_write": false, 00:15:38.452 "abort": true, 00:15:38.452 "seek_hole": false, 00:15:38.452 "seek_data": false, 00:15:38.452 "copy": true, 00:15:38.452 "nvme_iov_md": false 00:15:38.452 }, 00:15:38.452 "memory_domains": [ 00:15:38.452 { 00:15:38.452 "dma_device_id": "system", 00:15:38.452 "dma_device_type": 1 00:15:38.452 }, 00:15:38.452 { 00:15:38.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.452 "dma_device_type": 2 00:15:38.452 } 00:15:38.452 ], 00:15:38.452 "driver_specific": {} 00:15:38.452 } 00:15:38.452 ] 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.452 "name": "Existed_Raid", 00:15:38.452 "uuid": "7d13e92d-ce63-49c2-92a3-0ec25c7da67d", 00:15:38.452 "strip_size_kb": 64, 00:15:38.452 "state": "configuring", 00:15:38.452 "raid_level": "concat", 00:15:38.452 "superblock": true, 00:15:38.452 "num_base_bdevs": 4, 00:15:38.452 "num_base_bdevs_discovered": 3, 00:15:38.452 "num_base_bdevs_operational": 4, 00:15:38.452 "base_bdevs_list": [ 00:15:38.452 { 00:15:38.452 "name": "BaseBdev1", 00:15:38.452 "uuid": "22c1075b-17d2-4841-bac4-4130ea465532", 00:15:38.452 "is_configured": true, 00:15:38.452 "data_offset": 2048, 00:15:38.452 "data_size": 63488 00:15:38.452 }, 00:15:38.452 { 00:15:38.452 "name": "BaseBdev2", 00:15:38.452 
"uuid": "cf328334-1e3d-4dd1-b0bf-8f842ff8327a", 00:15:38.452 "is_configured": true, 00:15:38.452 "data_offset": 2048, 00:15:38.452 "data_size": 63488 00:15:38.452 }, 00:15:38.452 { 00:15:38.452 "name": "BaseBdev3", 00:15:38.452 "uuid": "6f8cbe6d-0be3-4324-9a9d-300e72c4ebf1", 00:15:38.452 "is_configured": true, 00:15:38.452 "data_offset": 2048, 00:15:38.452 "data_size": 63488 00:15:38.452 }, 00:15:38.452 { 00:15:38.452 "name": "BaseBdev4", 00:15:38.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.452 "is_configured": false, 00:15:38.452 "data_offset": 0, 00:15:38.452 "data_size": 0 00:15:38.452 } 00:15:38.452 ] 00:15:38.452 }' 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.452 10:43:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.018 [2024-10-30 10:44:00.286725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:39.018 [2024-10-30 10:44:00.287289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:39.018 [2024-10-30 10:44:00.287315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:39.018 BaseBdev4 00:15:39.018 [2024-10-30 10:44:00.287652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:39.018 [2024-10-30 10:44:00.287848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:39.018 [2024-10-30 10:44:00.287872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:39.018 [2024-10-30 10:44:00.288061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.018 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.018 [ 00:15:39.018 { 00:15:39.018 "name": "BaseBdev4", 00:15:39.018 "aliases": [ 00:15:39.018 "f47e47fd-9a02-4839-8b2d-47b1ce9a875f" 00:15:39.018 ], 00:15:39.018 "product_name": "Malloc disk", 00:15:39.018 "block_size": 512, 00:15:39.018 
"num_blocks": 65536, 00:15:39.018 "uuid": "f47e47fd-9a02-4839-8b2d-47b1ce9a875f", 00:15:39.018 "assigned_rate_limits": { 00:15:39.018 "rw_ios_per_sec": 0, 00:15:39.018 "rw_mbytes_per_sec": 0, 00:15:39.018 "r_mbytes_per_sec": 0, 00:15:39.018 "w_mbytes_per_sec": 0 00:15:39.018 }, 00:15:39.018 "claimed": true, 00:15:39.018 "claim_type": "exclusive_write", 00:15:39.018 "zoned": false, 00:15:39.018 "supported_io_types": { 00:15:39.018 "read": true, 00:15:39.018 "write": true, 00:15:39.018 "unmap": true, 00:15:39.018 "flush": true, 00:15:39.018 "reset": true, 00:15:39.018 "nvme_admin": false, 00:15:39.018 "nvme_io": false, 00:15:39.018 "nvme_io_md": false, 00:15:39.018 "write_zeroes": true, 00:15:39.018 "zcopy": true, 00:15:39.018 "get_zone_info": false, 00:15:39.018 "zone_management": false, 00:15:39.018 "zone_append": false, 00:15:39.018 "compare": false, 00:15:39.018 "compare_and_write": false, 00:15:39.018 "abort": true, 00:15:39.018 "seek_hole": false, 00:15:39.018 "seek_data": false, 00:15:39.018 "copy": true, 00:15:39.018 "nvme_iov_md": false 00:15:39.018 }, 00:15:39.019 "memory_domains": [ 00:15:39.019 { 00:15:39.019 "dma_device_id": "system", 00:15:39.019 "dma_device_type": 1 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.019 "dma_device_type": 2 00:15:39.019 } 00:15:39.019 ], 00:15:39.019 "driver_specific": {} 00:15:39.019 } 00:15:39.019 ] 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.019 "name": "Existed_Raid", 00:15:39.019 "uuid": "7d13e92d-ce63-49c2-92a3-0ec25c7da67d", 00:15:39.019 "strip_size_kb": 64, 00:15:39.019 "state": "online", 00:15:39.019 "raid_level": "concat", 00:15:39.019 "superblock": true, 00:15:39.019 "num_base_bdevs": 4, 
00:15:39.019 "num_base_bdevs_discovered": 4, 00:15:39.019 "num_base_bdevs_operational": 4, 00:15:39.019 "base_bdevs_list": [ 00:15:39.019 { 00:15:39.019 "name": "BaseBdev1", 00:15:39.019 "uuid": "22c1075b-17d2-4841-bac4-4130ea465532", 00:15:39.019 "is_configured": true, 00:15:39.019 "data_offset": 2048, 00:15:39.019 "data_size": 63488 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "name": "BaseBdev2", 00:15:39.019 "uuid": "cf328334-1e3d-4dd1-b0bf-8f842ff8327a", 00:15:39.019 "is_configured": true, 00:15:39.019 "data_offset": 2048, 00:15:39.019 "data_size": 63488 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "name": "BaseBdev3", 00:15:39.019 "uuid": "6f8cbe6d-0be3-4324-9a9d-300e72c4ebf1", 00:15:39.019 "is_configured": true, 00:15:39.019 "data_offset": 2048, 00:15:39.019 "data_size": 63488 00:15:39.019 }, 00:15:39.019 { 00:15:39.019 "name": "BaseBdev4", 00:15:39.019 "uuid": "f47e47fd-9a02-4839-8b2d-47b1ce9a875f", 00:15:39.019 "is_configured": true, 00:15:39.019 "data_offset": 2048, 00:15:39.019 "data_size": 63488 00:15:39.019 } 00:15:39.019 ] 00:15:39.019 }' 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.019 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.587 
10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.587 [2024-10-30 10:44:00.835447] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.587 "name": "Existed_Raid", 00:15:39.587 "aliases": [ 00:15:39.587 "7d13e92d-ce63-49c2-92a3-0ec25c7da67d" 00:15:39.587 ], 00:15:39.587 "product_name": "Raid Volume", 00:15:39.587 "block_size": 512, 00:15:39.587 "num_blocks": 253952, 00:15:39.587 "uuid": "7d13e92d-ce63-49c2-92a3-0ec25c7da67d", 00:15:39.587 "assigned_rate_limits": { 00:15:39.587 "rw_ios_per_sec": 0, 00:15:39.587 "rw_mbytes_per_sec": 0, 00:15:39.587 "r_mbytes_per_sec": 0, 00:15:39.587 "w_mbytes_per_sec": 0 00:15:39.587 }, 00:15:39.587 "claimed": false, 00:15:39.587 "zoned": false, 00:15:39.587 "supported_io_types": { 00:15:39.587 "read": true, 00:15:39.587 "write": true, 00:15:39.587 "unmap": true, 00:15:39.587 "flush": true, 00:15:39.587 "reset": true, 00:15:39.587 "nvme_admin": false, 00:15:39.587 "nvme_io": false, 00:15:39.587 "nvme_io_md": false, 00:15:39.587 "write_zeroes": true, 00:15:39.587 "zcopy": false, 00:15:39.587 "get_zone_info": false, 00:15:39.587 "zone_management": false, 00:15:39.587 "zone_append": false, 00:15:39.587 "compare": false, 00:15:39.587 "compare_and_write": false, 00:15:39.587 "abort": false, 00:15:39.587 "seek_hole": false, 00:15:39.587 "seek_data": false, 00:15:39.587 "copy": false, 00:15:39.587 
"nvme_iov_md": false 00:15:39.587 }, 00:15:39.587 "memory_domains": [ 00:15:39.587 { 00:15:39.587 "dma_device_id": "system", 00:15:39.587 "dma_device_type": 1 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.587 "dma_device_type": 2 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "dma_device_id": "system", 00:15:39.587 "dma_device_type": 1 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.587 "dma_device_type": 2 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "dma_device_id": "system", 00:15:39.587 "dma_device_type": 1 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.587 "dma_device_type": 2 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "dma_device_id": "system", 00:15:39.587 "dma_device_type": 1 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.587 "dma_device_type": 2 00:15:39.587 } 00:15:39.587 ], 00:15:39.587 "driver_specific": { 00:15:39.587 "raid": { 00:15:39.587 "uuid": "7d13e92d-ce63-49c2-92a3-0ec25c7da67d", 00:15:39.587 "strip_size_kb": 64, 00:15:39.587 "state": "online", 00:15:39.587 "raid_level": "concat", 00:15:39.587 "superblock": true, 00:15:39.587 "num_base_bdevs": 4, 00:15:39.587 "num_base_bdevs_discovered": 4, 00:15:39.587 "num_base_bdevs_operational": 4, 00:15:39.587 "base_bdevs_list": [ 00:15:39.587 { 00:15:39.587 "name": "BaseBdev1", 00:15:39.587 "uuid": "22c1075b-17d2-4841-bac4-4130ea465532", 00:15:39.587 "is_configured": true, 00:15:39.587 "data_offset": 2048, 00:15:39.587 "data_size": 63488 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "name": "BaseBdev2", 00:15:39.587 "uuid": "cf328334-1e3d-4dd1-b0bf-8f842ff8327a", 00:15:39.587 "is_configured": true, 00:15:39.587 "data_offset": 2048, 00:15:39.587 "data_size": 63488 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "name": "BaseBdev3", 00:15:39.587 "uuid": "6f8cbe6d-0be3-4324-9a9d-300e72c4ebf1", 00:15:39.587 "is_configured": true, 
00:15:39.587 "data_offset": 2048, 00:15:39.587 "data_size": 63488 00:15:39.587 }, 00:15:39.587 { 00:15:39.587 "name": "BaseBdev4", 00:15:39.587 "uuid": "f47e47fd-9a02-4839-8b2d-47b1ce9a875f", 00:15:39.587 "is_configured": true, 00:15:39.587 "data_offset": 2048, 00:15:39.587 "data_size": 63488 00:15:39.587 } 00:15:39.587 ] 00:15:39.587 } 00:15:39.587 } 00:15:39.587 }' 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:39.587 BaseBdev2 00:15:39.587 BaseBdev3 00:15:39.587 BaseBdev4' 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.587 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.588 10:44:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.588 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.588 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.588 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.588 10:44:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.588 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:39.588 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.588 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.588 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.847 [2024-10-30 10:44:01.207210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.847 [2024-10-30 10:44:01.207252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.847 [2024-10-30 10:44:01.207318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.847 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.848 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.106 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:40.106 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.106 "name": "Existed_Raid", 00:15:40.106 "uuid": "7d13e92d-ce63-49c2-92a3-0ec25c7da67d", 00:15:40.106 "strip_size_kb": 64, 00:15:40.106 "state": "offline", 00:15:40.106 "raid_level": "concat", 00:15:40.106 "superblock": true, 00:15:40.106 "num_base_bdevs": 4, 00:15:40.106 "num_base_bdevs_discovered": 3, 00:15:40.106 "num_base_bdevs_operational": 3, 00:15:40.106 "base_bdevs_list": [ 00:15:40.106 { 00:15:40.106 "name": null, 00:15:40.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.107 "is_configured": false, 00:15:40.107 "data_offset": 0, 00:15:40.107 "data_size": 63488 00:15:40.107 }, 00:15:40.107 { 00:15:40.107 "name": "BaseBdev2", 00:15:40.107 "uuid": "cf328334-1e3d-4dd1-b0bf-8f842ff8327a", 00:15:40.107 "is_configured": true, 00:15:40.107 "data_offset": 2048, 00:15:40.107 "data_size": 63488 00:15:40.107 }, 00:15:40.107 { 00:15:40.107 "name": "BaseBdev3", 00:15:40.107 "uuid": "6f8cbe6d-0be3-4324-9a9d-300e72c4ebf1", 00:15:40.107 "is_configured": true, 00:15:40.107 "data_offset": 2048, 00:15:40.107 "data_size": 63488 00:15:40.107 }, 00:15:40.107 { 00:15:40.107 "name": "BaseBdev4", 00:15:40.107 "uuid": "f47e47fd-9a02-4839-8b2d-47b1ce9a875f", 00:15:40.107 "is_configured": true, 00:15:40.107 "data_offset": 2048, 00:15:40.107 "data_size": 63488 00:15:40.107 } 00:15:40.107 ] 00:15:40.107 }' 00:15:40.107 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.107 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.673 
10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.673 [2024-10-30 10:44:01.897635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.673 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:40.674 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.674 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.674 10:44:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:40.674 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.674 10:44:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.674 [2024-10-30 10:44:02.046861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.674 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:40.932 10:44:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.932 [2024-10-30 10:44:02.186771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:40.932 [2024-10-30 10:44:02.186828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.932 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.933 BaseBdev2 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.933 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.933 [ 00:15:40.933 { 00:15:40.933 "name": "BaseBdev2", 00:15:40.933 "aliases": [ 00:15:40.933 
"e590f5e8-49e1-4312-a90e-2318c1c74d06" 00:15:40.933 ], 00:15:40.933 "product_name": "Malloc disk", 00:15:40.933 "block_size": 512, 00:15:40.933 "num_blocks": 65536, 00:15:40.933 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:40.933 "assigned_rate_limits": { 00:15:40.933 "rw_ios_per_sec": 0, 00:15:40.933 "rw_mbytes_per_sec": 0, 00:15:40.933 "r_mbytes_per_sec": 0, 00:15:40.933 "w_mbytes_per_sec": 0 00:15:40.933 }, 00:15:40.933 "claimed": false, 00:15:40.933 "zoned": false, 00:15:40.933 "supported_io_types": { 00:15:40.933 "read": true, 00:15:40.933 "write": true, 00:15:40.933 "unmap": true, 00:15:40.933 "flush": true, 00:15:40.933 "reset": true, 00:15:40.933 "nvme_admin": false, 00:15:40.933 "nvme_io": false, 00:15:40.933 "nvme_io_md": false, 00:15:40.933 "write_zeroes": true, 00:15:40.933 "zcopy": true, 00:15:40.933 "get_zone_info": false, 00:15:40.933 "zone_management": false, 00:15:40.933 "zone_append": false, 00:15:40.933 "compare": false, 00:15:40.933 "compare_and_write": false, 00:15:40.933 "abort": true, 00:15:40.933 "seek_hole": false, 00:15:40.933 "seek_data": false, 00:15:40.933 "copy": true, 00:15:40.933 "nvme_iov_md": false 00:15:40.933 }, 00:15:40.933 "memory_domains": [ 00:15:41.192 { 00:15:41.192 "dma_device_id": "system", 00:15:41.192 "dma_device_type": 1 00:15:41.192 }, 00:15:41.192 { 00:15:41.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.192 "dma_device_type": 2 00:15:41.192 } 00:15:41.192 ], 00:15:41.192 "driver_specific": {} 00:15:41.192 } 00:15:41.192 ] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.192 10:44:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.192 BaseBdev3 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.192 [ 00:15:41.192 { 
00:15:41.192 "name": "BaseBdev3", 00:15:41.192 "aliases": [ 00:15:41.192 "98016db8-283f-4e52-81e5-5356d7e43475" 00:15:41.192 ], 00:15:41.192 "product_name": "Malloc disk", 00:15:41.192 "block_size": 512, 00:15:41.192 "num_blocks": 65536, 00:15:41.192 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:41.192 "assigned_rate_limits": { 00:15:41.192 "rw_ios_per_sec": 0, 00:15:41.192 "rw_mbytes_per_sec": 0, 00:15:41.192 "r_mbytes_per_sec": 0, 00:15:41.192 "w_mbytes_per_sec": 0 00:15:41.192 }, 00:15:41.192 "claimed": false, 00:15:41.192 "zoned": false, 00:15:41.192 "supported_io_types": { 00:15:41.192 "read": true, 00:15:41.192 "write": true, 00:15:41.192 "unmap": true, 00:15:41.192 "flush": true, 00:15:41.192 "reset": true, 00:15:41.192 "nvme_admin": false, 00:15:41.192 "nvme_io": false, 00:15:41.192 "nvme_io_md": false, 00:15:41.192 "write_zeroes": true, 00:15:41.192 "zcopy": true, 00:15:41.192 "get_zone_info": false, 00:15:41.192 "zone_management": false, 00:15:41.192 "zone_append": false, 00:15:41.192 "compare": false, 00:15:41.192 "compare_and_write": false, 00:15:41.192 "abort": true, 00:15:41.192 "seek_hole": false, 00:15:41.192 "seek_data": false, 00:15:41.192 "copy": true, 00:15:41.192 "nvme_iov_md": false 00:15:41.192 }, 00:15:41.192 "memory_domains": [ 00:15:41.192 { 00:15:41.192 "dma_device_id": "system", 00:15:41.192 "dma_device_type": 1 00:15:41.192 }, 00:15:41.192 { 00:15:41.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.192 "dma_device_type": 2 00:15:41.192 } 00:15:41.192 ], 00:15:41.192 "driver_specific": {} 00:15:41.192 } 00:15:41.192 ] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.192 BaseBdev4 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.192 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:41.192 [ 00:15:41.192 { 00:15:41.192 "name": "BaseBdev4", 00:15:41.192 "aliases": [ 00:15:41.192 "2f0d3833-3589-4df2-a41c-bda17cca3c15" 00:15:41.192 ], 00:15:41.192 "product_name": "Malloc disk", 00:15:41.192 "block_size": 512, 00:15:41.192 "num_blocks": 65536, 00:15:41.192 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:41.192 "assigned_rate_limits": { 00:15:41.192 "rw_ios_per_sec": 0, 00:15:41.192 "rw_mbytes_per_sec": 0, 00:15:41.192 "r_mbytes_per_sec": 0, 00:15:41.192 "w_mbytes_per_sec": 0 00:15:41.192 }, 00:15:41.192 "claimed": false, 00:15:41.192 "zoned": false, 00:15:41.192 "supported_io_types": { 00:15:41.192 "read": true, 00:15:41.192 "write": true, 00:15:41.192 "unmap": true, 00:15:41.192 "flush": true, 00:15:41.192 "reset": true, 00:15:41.192 "nvme_admin": false, 00:15:41.192 "nvme_io": false, 00:15:41.192 "nvme_io_md": false, 00:15:41.192 "write_zeroes": true, 00:15:41.192 "zcopy": true, 00:15:41.192 "get_zone_info": false, 00:15:41.192 "zone_management": false, 00:15:41.192 "zone_append": false, 00:15:41.192 "compare": false, 00:15:41.192 "compare_and_write": false, 00:15:41.192 "abort": true, 00:15:41.192 "seek_hole": false, 00:15:41.192 "seek_data": false, 00:15:41.192 "copy": true, 00:15:41.192 "nvme_iov_md": false 00:15:41.193 }, 00:15:41.193 "memory_domains": [ 00:15:41.193 { 00:15:41.193 "dma_device_id": "system", 00:15:41.193 "dma_device_type": 1 00:15:41.193 }, 00:15:41.193 { 00:15:41.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.193 "dma_device_type": 2 00:15:41.193 } 00:15:41.193 ], 00:15:41.193 "driver_specific": {} 00:15:41.193 } 00:15:41.193 ] 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.193 10:44:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.193 [2024-10-30 10:44:02.562924] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.193 [2024-10-30 10:44:02.563154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.193 [2024-10-30 10:44:02.563333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.193 [2024-10-30 10:44:02.565711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.193 [2024-10-30 10:44:02.565890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.193 "name": "Existed_Raid", 00:15:41.193 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:41.193 "strip_size_kb": 64, 00:15:41.193 "state": "configuring", 00:15:41.193 "raid_level": "concat", 00:15:41.193 "superblock": true, 00:15:41.193 "num_base_bdevs": 4, 00:15:41.193 "num_base_bdevs_discovered": 3, 00:15:41.193 "num_base_bdevs_operational": 4, 00:15:41.193 "base_bdevs_list": [ 00:15:41.193 { 00:15:41.193 "name": "BaseBdev1", 00:15:41.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.193 "is_configured": false, 00:15:41.193 "data_offset": 0, 00:15:41.193 "data_size": 0 00:15:41.193 }, 00:15:41.193 { 00:15:41.193 "name": "BaseBdev2", 00:15:41.193 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:41.193 "is_configured": true, 00:15:41.193 "data_offset": 2048, 00:15:41.193 "data_size": 63488 
00:15:41.193 }, 00:15:41.193 { 00:15:41.193 "name": "BaseBdev3", 00:15:41.193 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:41.193 "is_configured": true, 00:15:41.193 "data_offset": 2048, 00:15:41.193 "data_size": 63488 00:15:41.193 }, 00:15:41.193 { 00:15:41.193 "name": "BaseBdev4", 00:15:41.193 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:41.193 "is_configured": true, 00:15:41.193 "data_offset": 2048, 00:15:41.193 "data_size": 63488 00:15:41.193 } 00:15:41.193 ] 00:15:41.193 }' 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.193 10:44:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.761 [2024-10-30 10:44:03.115169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.761 "name": "Existed_Raid", 00:15:41.761 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:41.761 "strip_size_kb": 64, 00:15:41.761 "state": "configuring", 00:15:41.761 "raid_level": "concat", 00:15:41.761 "superblock": true, 00:15:41.761 "num_base_bdevs": 4, 00:15:41.761 "num_base_bdevs_discovered": 2, 00:15:41.761 "num_base_bdevs_operational": 4, 00:15:41.761 "base_bdevs_list": [ 00:15:41.761 { 00:15:41.761 "name": "BaseBdev1", 00:15:41.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.761 "is_configured": false, 00:15:41.761 "data_offset": 0, 00:15:41.761 "data_size": 0 00:15:41.761 }, 00:15:41.761 { 00:15:41.761 "name": null, 00:15:41.761 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:41.761 "is_configured": false, 00:15:41.761 "data_offset": 0, 00:15:41.761 "data_size": 63488 
00:15:41.761 }, 00:15:41.761 { 00:15:41.761 "name": "BaseBdev3", 00:15:41.761 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:41.761 "is_configured": true, 00:15:41.761 "data_offset": 2048, 00:15:41.761 "data_size": 63488 00:15:41.761 }, 00:15:41.761 { 00:15:41.761 "name": "BaseBdev4", 00:15:41.761 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:41.761 "is_configured": true, 00:15:41.761 "data_offset": 2048, 00:15:41.761 "data_size": 63488 00:15:41.761 } 00:15:41.761 ] 00:15:41.761 }' 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.761 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.330 [2024-10-30 10:44:03.728114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.330 BaseBdev1 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.330 [ 00:15:42.330 { 00:15:42.330 "name": "BaseBdev1", 00:15:42.330 "aliases": [ 00:15:42.330 "6721acb1-a677-4111-bca6-4f9ea7556c2a" 00:15:42.330 ], 00:15:42.330 "product_name": "Malloc disk", 00:15:42.330 "block_size": 512, 00:15:42.330 "num_blocks": 65536, 00:15:42.330 "uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:42.330 "assigned_rate_limits": { 00:15:42.330 "rw_ios_per_sec": 0, 00:15:42.330 "rw_mbytes_per_sec": 0, 
00:15:42.330 "r_mbytes_per_sec": 0, 00:15:42.330 "w_mbytes_per_sec": 0 00:15:42.330 }, 00:15:42.330 "claimed": true, 00:15:42.330 "claim_type": "exclusive_write", 00:15:42.330 "zoned": false, 00:15:42.330 "supported_io_types": { 00:15:42.330 "read": true, 00:15:42.330 "write": true, 00:15:42.330 "unmap": true, 00:15:42.330 "flush": true, 00:15:42.330 "reset": true, 00:15:42.330 "nvme_admin": false, 00:15:42.330 "nvme_io": false, 00:15:42.330 "nvme_io_md": false, 00:15:42.330 "write_zeroes": true, 00:15:42.330 "zcopy": true, 00:15:42.330 "get_zone_info": false, 00:15:42.330 "zone_management": false, 00:15:42.330 "zone_append": false, 00:15:42.330 "compare": false, 00:15:42.330 "compare_and_write": false, 00:15:42.330 "abort": true, 00:15:42.330 "seek_hole": false, 00:15:42.330 "seek_data": false, 00:15:42.330 "copy": true, 00:15:42.330 "nvme_iov_md": false 00:15:42.330 }, 00:15:42.330 "memory_domains": [ 00:15:42.330 { 00:15:42.330 "dma_device_id": "system", 00:15:42.330 "dma_device_type": 1 00:15:42.330 }, 00:15:42.330 { 00:15:42.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.330 "dma_device_type": 2 00:15:42.330 } 00:15:42.330 ], 00:15:42.330 "driver_specific": {} 00:15:42.330 } 00:15:42.330 ] 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.330 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:42.331 10:44:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.331 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.592 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.592 "name": "Existed_Raid", 00:15:42.592 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:42.592 "strip_size_kb": 64, 00:15:42.592 "state": "configuring", 00:15:42.592 "raid_level": "concat", 00:15:42.592 "superblock": true, 00:15:42.592 "num_base_bdevs": 4, 00:15:42.592 "num_base_bdevs_discovered": 3, 00:15:42.592 "num_base_bdevs_operational": 4, 00:15:42.592 "base_bdevs_list": [ 00:15:42.592 { 00:15:42.592 "name": "BaseBdev1", 00:15:42.592 "uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:42.592 "is_configured": true, 00:15:42.592 "data_offset": 2048, 00:15:42.592 "data_size": 63488 00:15:42.592 }, 00:15:42.592 { 
00:15:42.592 "name": null, 00:15:42.592 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:42.592 "is_configured": false, 00:15:42.592 "data_offset": 0, 00:15:42.592 "data_size": 63488 00:15:42.592 }, 00:15:42.592 { 00:15:42.592 "name": "BaseBdev3", 00:15:42.592 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:42.592 "is_configured": true, 00:15:42.592 "data_offset": 2048, 00:15:42.592 "data_size": 63488 00:15:42.592 }, 00:15:42.592 { 00:15:42.592 "name": "BaseBdev4", 00:15:42.592 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:42.592 "is_configured": true, 00:15:42.592 "data_offset": 2048, 00:15:42.592 "data_size": 63488 00:15:42.592 } 00:15:42.592 ] 00:15:42.592 }' 00:15:42.592 10:44:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.592 10:44:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.853 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:42.853 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.853 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.853 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.853 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.112 [2024-10-30 10:44:04.352347] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.112 10:44:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.112 "name": "Existed_Raid", 00:15:43.112 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:43.112 "strip_size_kb": 64, 00:15:43.112 "state": "configuring", 00:15:43.112 "raid_level": "concat", 00:15:43.112 "superblock": true, 00:15:43.112 "num_base_bdevs": 4, 00:15:43.112 "num_base_bdevs_discovered": 2, 00:15:43.112 "num_base_bdevs_operational": 4, 00:15:43.112 "base_bdevs_list": [ 00:15:43.112 { 00:15:43.112 "name": "BaseBdev1", 00:15:43.112 "uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:43.112 "is_configured": true, 00:15:43.112 "data_offset": 2048, 00:15:43.112 "data_size": 63488 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "name": null, 00:15:43.112 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:43.112 "is_configured": false, 00:15:43.112 "data_offset": 0, 00:15:43.112 "data_size": 63488 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "name": null, 00:15:43.112 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:43.112 "is_configured": false, 00:15:43.112 "data_offset": 0, 00:15:43.112 "data_size": 63488 00:15:43.112 }, 00:15:43.112 { 00:15:43.112 "name": "BaseBdev4", 00:15:43.112 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:43.112 "is_configured": true, 00:15:43.112 "data_offset": 2048, 00:15:43.112 "data_size": 63488 00:15:43.112 } 00:15:43.112 ] 00:15:43.112 }' 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.112 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.679 
10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.679 [2024-10-30 10:44:04.956601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.679 10:44:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.679 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.679 "name": "Existed_Raid", 00:15:43.679 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:43.679 "strip_size_kb": 64, 00:15:43.679 "state": "configuring", 00:15:43.679 "raid_level": "concat", 00:15:43.679 "superblock": true, 00:15:43.679 "num_base_bdevs": 4, 00:15:43.679 "num_base_bdevs_discovered": 3, 00:15:43.679 "num_base_bdevs_operational": 4, 00:15:43.679 "base_bdevs_list": [ 00:15:43.679 { 00:15:43.679 "name": "BaseBdev1", 00:15:43.679 "uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:43.679 "is_configured": true, 00:15:43.679 "data_offset": 2048, 00:15:43.679 "data_size": 63488 00:15:43.679 }, 00:15:43.679 { 00:15:43.679 "name": null, 00:15:43.679 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:43.679 "is_configured": false, 00:15:43.679 "data_offset": 0, 00:15:43.679 "data_size": 63488 00:15:43.679 }, 00:15:43.679 { 00:15:43.679 "name": "BaseBdev3", 00:15:43.679 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:43.679 "is_configured": true, 00:15:43.679 "data_offset": 2048, 00:15:43.679 "data_size": 63488 00:15:43.679 }, 00:15:43.679 { 00:15:43.679 "name": "BaseBdev4", 00:15:43.679 "uuid": 
"2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:43.679 "is_configured": true, 00:15:43.679 "data_offset": 2048, 00:15:43.679 "data_size": 63488 00:15:43.679 } 00:15:43.679 ] 00:15:43.679 }' 00:15:43.679 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.679 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.248 [2024-10-30 10:44:05.524804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.248 "name": "Existed_Raid", 00:15:44.248 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:44.248 "strip_size_kb": 64, 00:15:44.248 "state": "configuring", 00:15:44.248 "raid_level": "concat", 00:15:44.248 "superblock": true, 00:15:44.248 "num_base_bdevs": 4, 00:15:44.248 "num_base_bdevs_discovered": 2, 00:15:44.248 "num_base_bdevs_operational": 4, 00:15:44.248 "base_bdevs_list": [ 00:15:44.248 { 00:15:44.248 "name": null, 00:15:44.248 
"uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:44.248 "is_configured": false, 00:15:44.248 "data_offset": 0, 00:15:44.248 "data_size": 63488 00:15:44.248 }, 00:15:44.248 { 00:15:44.248 "name": null, 00:15:44.248 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:44.248 "is_configured": false, 00:15:44.248 "data_offset": 0, 00:15:44.248 "data_size": 63488 00:15:44.248 }, 00:15:44.248 { 00:15:44.248 "name": "BaseBdev3", 00:15:44.248 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:44.248 "is_configured": true, 00:15:44.248 "data_offset": 2048, 00:15:44.248 "data_size": 63488 00:15:44.248 }, 00:15:44.248 { 00:15:44.248 "name": "BaseBdev4", 00:15:44.248 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:44.248 "is_configured": true, 00:15:44.248 "data_offset": 2048, 00:15:44.248 "data_size": 63488 00:15:44.248 } 00:15:44.248 ] 00:15:44.248 }' 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.248 10:44:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.817 [2024-10-30 10:44:06.208282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.817 10:44:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.817 "name": "Existed_Raid", 00:15:44.817 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:44.817 "strip_size_kb": 64, 00:15:44.817 "state": "configuring", 00:15:44.817 "raid_level": "concat", 00:15:44.817 "superblock": true, 00:15:44.817 "num_base_bdevs": 4, 00:15:44.817 "num_base_bdevs_discovered": 3, 00:15:44.817 "num_base_bdevs_operational": 4, 00:15:44.817 "base_bdevs_list": [ 00:15:44.817 { 00:15:44.817 "name": null, 00:15:44.817 "uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:44.817 "is_configured": false, 00:15:44.817 "data_offset": 0, 00:15:44.817 "data_size": 63488 00:15:44.817 }, 00:15:44.817 { 00:15:44.817 "name": "BaseBdev2", 00:15:44.817 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:44.817 "is_configured": true, 00:15:44.817 "data_offset": 2048, 00:15:44.817 "data_size": 63488 00:15:44.817 }, 00:15:44.817 { 00:15:44.817 "name": "BaseBdev3", 00:15:44.817 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:44.817 "is_configured": true, 00:15:44.817 "data_offset": 2048, 00:15:44.817 "data_size": 63488 00:15:44.817 }, 00:15:44.817 { 00:15:44.817 "name": "BaseBdev4", 00:15:44.817 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:44.817 "is_configured": true, 00:15:44.817 "data_offset": 2048, 00:15:44.817 "data_size": 63488 00:15:44.817 } 00:15:44.817 ] 00:15:44.817 }' 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.817 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:45.448 10:44:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6721acb1-a677-4111-bca6-4f9ea7556c2a 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.448 [2024-10-30 10:44:06.889062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:45.448 [2024-10-30 10:44:06.889343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:45.448 [2024-10-30 10:44:06.889360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:45.448 [2024-10-30 10:44:06.889666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:45.448 [2024-10-30 10:44:06.889832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:45.448 [2024-10-30 10:44:06.889852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:45.448 [2024-10-30 10:44:06.890047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.448 NewBaseBdev 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:45.448 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.448 10:44:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.448 [ 00:15:45.448 { 00:15:45.448 "name": "NewBaseBdev", 00:15:45.448 "aliases": [ 00:15:45.448 "6721acb1-a677-4111-bca6-4f9ea7556c2a" 00:15:45.448 ], 00:15:45.448 "product_name": "Malloc disk", 00:15:45.448 "block_size": 512, 00:15:45.448 "num_blocks": 65536, 00:15:45.448 "uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:45.448 "assigned_rate_limits": { 00:15:45.448 "rw_ios_per_sec": 0, 00:15:45.448 "rw_mbytes_per_sec": 0, 00:15:45.448 "r_mbytes_per_sec": 0, 00:15:45.448 "w_mbytes_per_sec": 0 00:15:45.448 }, 00:15:45.448 "claimed": true, 00:15:45.448 "claim_type": "exclusive_write", 00:15:45.448 "zoned": false, 00:15:45.448 "supported_io_types": { 00:15:45.448 "read": true, 00:15:45.448 "write": true, 00:15:45.448 "unmap": true, 00:15:45.448 "flush": true, 00:15:45.448 "reset": true, 00:15:45.708 "nvme_admin": false, 00:15:45.708 "nvme_io": false, 00:15:45.708 "nvme_io_md": false, 00:15:45.708 "write_zeroes": true, 00:15:45.708 "zcopy": true, 00:15:45.708 "get_zone_info": false, 00:15:45.708 "zone_management": false, 00:15:45.708 "zone_append": false, 00:15:45.708 "compare": false, 00:15:45.708 "compare_and_write": false, 00:15:45.708 "abort": true, 00:15:45.708 "seek_hole": false, 00:15:45.708 "seek_data": false, 00:15:45.708 "copy": true, 00:15:45.708 "nvme_iov_md": false 00:15:45.708 }, 00:15:45.708 "memory_domains": [ 00:15:45.708 { 00:15:45.708 "dma_device_id": "system", 00:15:45.708 "dma_device_type": 1 00:15:45.708 }, 00:15:45.708 { 00:15:45.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.708 "dma_device_type": 2 00:15:45.708 } 00:15:45.708 ], 00:15:45.708 "driver_specific": {} 00:15:45.708 } 00:15:45.708 ] 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:45.708 10:44:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.708 "name": "Existed_Raid", 00:15:45.708 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:45.708 "strip_size_kb": 64, 00:15:45.708 
"state": "online", 00:15:45.708 "raid_level": "concat", 00:15:45.708 "superblock": true, 00:15:45.708 "num_base_bdevs": 4, 00:15:45.708 "num_base_bdevs_discovered": 4, 00:15:45.708 "num_base_bdevs_operational": 4, 00:15:45.708 "base_bdevs_list": [ 00:15:45.708 { 00:15:45.708 "name": "NewBaseBdev", 00:15:45.708 "uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:45.708 "is_configured": true, 00:15:45.708 "data_offset": 2048, 00:15:45.708 "data_size": 63488 00:15:45.708 }, 00:15:45.708 { 00:15:45.708 "name": "BaseBdev2", 00:15:45.708 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:45.708 "is_configured": true, 00:15:45.708 "data_offset": 2048, 00:15:45.708 "data_size": 63488 00:15:45.708 }, 00:15:45.708 { 00:15:45.708 "name": "BaseBdev3", 00:15:45.708 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:45.708 "is_configured": true, 00:15:45.708 "data_offset": 2048, 00:15:45.708 "data_size": 63488 00:15:45.708 }, 00:15:45.708 { 00:15:45.708 "name": "BaseBdev4", 00:15:45.708 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:45.708 "is_configured": true, 00:15:45.708 "data_offset": 2048, 00:15:45.708 "data_size": 63488 00:15:45.708 } 00:15:45.708 ] 00:15:45.708 }' 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.708 10:44:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.276 
10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.276 [2024-10-30 10:44:07.449728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.276 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:46.276 "name": "Existed_Raid", 00:15:46.276 "aliases": [ 00:15:46.276 "d30d596f-226e-48cd-85fb-a43e5a8a3b9b" 00:15:46.276 ], 00:15:46.276 "product_name": "Raid Volume", 00:15:46.276 "block_size": 512, 00:15:46.276 "num_blocks": 253952, 00:15:46.276 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:46.276 "assigned_rate_limits": { 00:15:46.276 "rw_ios_per_sec": 0, 00:15:46.276 "rw_mbytes_per_sec": 0, 00:15:46.276 "r_mbytes_per_sec": 0, 00:15:46.276 "w_mbytes_per_sec": 0 00:15:46.276 }, 00:15:46.276 "claimed": false, 00:15:46.276 "zoned": false, 00:15:46.276 "supported_io_types": { 00:15:46.276 "read": true, 00:15:46.276 "write": true, 00:15:46.277 "unmap": true, 00:15:46.277 "flush": true, 00:15:46.277 "reset": true, 00:15:46.277 "nvme_admin": false, 00:15:46.277 "nvme_io": false, 00:15:46.277 "nvme_io_md": false, 00:15:46.277 "write_zeroes": true, 00:15:46.277 "zcopy": false, 00:15:46.277 "get_zone_info": false, 00:15:46.277 "zone_management": false, 00:15:46.277 "zone_append": false, 00:15:46.277 "compare": false, 00:15:46.277 "compare_and_write": false, 00:15:46.277 "abort": 
false, 00:15:46.277 "seek_hole": false, 00:15:46.277 "seek_data": false, 00:15:46.277 "copy": false, 00:15:46.277 "nvme_iov_md": false 00:15:46.277 }, 00:15:46.277 "memory_domains": [ 00:15:46.277 { 00:15:46.277 "dma_device_id": "system", 00:15:46.277 "dma_device_type": 1 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.277 "dma_device_type": 2 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "dma_device_id": "system", 00:15:46.277 "dma_device_type": 1 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.277 "dma_device_type": 2 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "dma_device_id": "system", 00:15:46.277 "dma_device_type": 1 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.277 "dma_device_type": 2 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "dma_device_id": "system", 00:15:46.277 "dma_device_type": 1 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.277 "dma_device_type": 2 00:15:46.277 } 00:15:46.277 ], 00:15:46.277 "driver_specific": { 00:15:46.277 "raid": { 00:15:46.277 "uuid": "d30d596f-226e-48cd-85fb-a43e5a8a3b9b", 00:15:46.277 "strip_size_kb": 64, 00:15:46.277 "state": "online", 00:15:46.277 "raid_level": "concat", 00:15:46.277 "superblock": true, 00:15:46.277 "num_base_bdevs": 4, 00:15:46.277 "num_base_bdevs_discovered": 4, 00:15:46.277 "num_base_bdevs_operational": 4, 00:15:46.277 "base_bdevs_list": [ 00:15:46.277 { 00:15:46.277 "name": "NewBaseBdev", 00:15:46.277 "uuid": "6721acb1-a677-4111-bca6-4f9ea7556c2a", 00:15:46.277 "is_configured": true, 00:15:46.277 "data_offset": 2048, 00:15:46.277 "data_size": 63488 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "name": "BaseBdev2", 00:15:46.277 "uuid": "e590f5e8-49e1-4312-a90e-2318c1c74d06", 00:15:46.277 "is_configured": true, 00:15:46.277 "data_offset": 2048, 00:15:46.277 "data_size": 63488 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 
"name": "BaseBdev3", 00:15:46.277 "uuid": "98016db8-283f-4e52-81e5-5356d7e43475", 00:15:46.277 "is_configured": true, 00:15:46.277 "data_offset": 2048, 00:15:46.277 "data_size": 63488 00:15:46.277 }, 00:15:46.277 { 00:15:46.277 "name": "BaseBdev4", 00:15:46.277 "uuid": "2f0d3833-3589-4df2-a41c-bda17cca3c15", 00:15:46.277 "is_configured": true, 00:15:46.277 "data_offset": 2048, 00:15:46.277 "data_size": 63488 00:15:46.277 } 00:15:46.277 ] 00:15:46.277 } 00:15:46.277 } 00:15:46.277 }' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:46.277 BaseBdev2 00:15:46.277 BaseBdev3 00:15:46.277 BaseBdev4' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.277 10:44:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.277 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.545 [2024-10-30 10:44:07.825357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.545 [2024-10-30 10:44:07.825396] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.545 [2024-10-30 10:44:07.825488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.545 [2024-10-30 10:44:07.825574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.545 [2024-10-30 10:44:07.825590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72251 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72251 ']' 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72251 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72251 00:15:46.545 killing process with pid 72251 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72251' 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72251 00:15:46.545 [2024-10-30 10:44:07.867934] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.545 10:44:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72251 00:15:46.804 [2024-10-30 10:44:08.205420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.181 ************************************ 00:15:48.181 END TEST raid_state_function_test_sb 00:15:48.181 ************************************ 00:15:48.181 10:44:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:48.181 00:15:48.181 real 0m12.894s 00:15:48.181 user 0m21.455s 00:15:48.181 sys 
0m1.804s 00:15:48.181 10:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:48.181 10:44:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.181 10:44:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:48.181 10:44:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:48.181 10:44:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:48.181 10:44:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.181 ************************************ 00:15:48.181 START TEST raid_superblock_test 00:15:48.181 ************************************ 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72928 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72928 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72928 ']' 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:48.181 10:44:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.182 [2024-10-30 10:44:09.382401] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:15:48.182 [2024-10-30 10:44:09.382593] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72928 ] 00:15:48.182 [2024-10-30 10:44:09.569575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.440 [2024-10-30 10:44:09.696737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.440 [2024-10-30 10:44:09.897914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.440 [2024-10-30 10:44:09.898002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.007 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:49.008 
10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.008 malloc1 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.008 [2024-10-30 10:44:10.430030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.008 [2024-10-30 10:44:10.430123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.008 [2024-10-30 10:44:10.430157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:49.008 [2024-10-30 10:44:10.430174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.008 [2024-10-30 10:44:10.432990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.008 [2024-10-30 10:44:10.433053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.008 pt1 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.008 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 malloc2 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 [2024-10-30 10:44:10.485391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.385 [2024-10-30 10:44:10.485472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.385 [2024-10-30 10:44:10.485501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:49.385 [2024-10-30 10:44:10.485514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.385 [2024-10-30 10:44:10.488297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.385 [2024-10-30 10:44:10.488358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.385 
pt2 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 malloc3 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 [2024-10-30 10:44:10.552477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:49.385 [2024-10-30 10:44:10.552554] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.385 [2024-10-30 10:44:10.552586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:49.385 [2024-10-30 10:44:10.552601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.385 [2024-10-30 10:44:10.555358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.385 [2024-10-30 10:44:10.555463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:49.385 pt3 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 malloc4 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 [2024-10-30 10:44:10.604733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:49.385 [2024-10-30 10:44:10.604815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.385 [2024-10-30 10:44:10.604842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:49.385 [2024-10-30 10:44:10.604857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.385 [2024-10-30 10:44:10.607610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.385 [2024-10-30 10:44:10.607674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:49.385 pt4 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.385 [2024-10-30 10:44:10.612783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.385 [2024-10-30 
10:44:10.615190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.385 [2024-10-30 10:44:10.615286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:49.385 [2024-10-30 10:44:10.615378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:49.385 [2024-10-30 10:44:10.615630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:49.385 [2024-10-30 10:44:10.615657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:49.385 [2024-10-30 10:44:10.615986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:49.385 [2024-10-30 10:44:10.616221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:49.385 [2024-10-30 10:44:10.616251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:49.385 [2024-10-30 10:44:10.616424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:49.385 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.386 "name": "raid_bdev1", 00:15:49.386 "uuid": "16479918-dd78-471c-a8b0-ba1d6252aa82", 00:15:49.386 "strip_size_kb": 64, 00:15:49.386 "state": "online", 00:15:49.386 "raid_level": "concat", 00:15:49.386 "superblock": true, 00:15:49.386 "num_base_bdevs": 4, 00:15:49.386 "num_base_bdevs_discovered": 4, 00:15:49.386 "num_base_bdevs_operational": 4, 00:15:49.386 "base_bdevs_list": [ 00:15:49.386 { 00:15:49.386 "name": "pt1", 00:15:49.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.386 "is_configured": true, 00:15:49.386 "data_offset": 2048, 00:15:49.386 "data_size": 63488 00:15:49.386 }, 00:15:49.386 { 00:15:49.386 "name": "pt2", 00:15:49.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.386 "is_configured": true, 00:15:49.386 "data_offset": 2048, 00:15:49.386 "data_size": 63488 00:15:49.386 }, 00:15:49.386 { 00:15:49.386 "name": "pt3", 00:15:49.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.386 "is_configured": true, 00:15:49.386 "data_offset": 2048, 00:15:49.386 
"data_size": 63488 00:15:49.386 }, 00:15:49.386 { 00:15:49.386 "name": "pt4", 00:15:49.386 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.386 "is_configured": true, 00:15:49.386 "data_offset": 2048, 00:15:49.386 "data_size": 63488 00:15:49.386 } 00:15:49.386 ] 00:15:49.386 }' 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.386 10:44:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.644 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.902 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:49.902 [2024-10-30 10:44:11.117339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.902 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.902 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:49.902 "name": "raid_bdev1", 00:15:49.902 "aliases": [ 00:15:49.902 "16479918-dd78-471c-a8b0-ba1d6252aa82" 
00:15:49.902 ], 00:15:49.902 "product_name": "Raid Volume", 00:15:49.902 "block_size": 512, 00:15:49.902 "num_blocks": 253952, 00:15:49.902 "uuid": "16479918-dd78-471c-a8b0-ba1d6252aa82", 00:15:49.902 "assigned_rate_limits": { 00:15:49.902 "rw_ios_per_sec": 0, 00:15:49.902 "rw_mbytes_per_sec": 0, 00:15:49.902 "r_mbytes_per_sec": 0, 00:15:49.902 "w_mbytes_per_sec": 0 00:15:49.902 }, 00:15:49.902 "claimed": false, 00:15:49.902 "zoned": false, 00:15:49.902 "supported_io_types": { 00:15:49.902 "read": true, 00:15:49.902 "write": true, 00:15:49.902 "unmap": true, 00:15:49.902 "flush": true, 00:15:49.902 "reset": true, 00:15:49.902 "nvme_admin": false, 00:15:49.902 "nvme_io": false, 00:15:49.902 "nvme_io_md": false, 00:15:49.902 "write_zeroes": true, 00:15:49.902 "zcopy": false, 00:15:49.902 "get_zone_info": false, 00:15:49.902 "zone_management": false, 00:15:49.902 "zone_append": false, 00:15:49.902 "compare": false, 00:15:49.902 "compare_and_write": false, 00:15:49.902 "abort": false, 00:15:49.902 "seek_hole": false, 00:15:49.902 "seek_data": false, 00:15:49.902 "copy": false, 00:15:49.902 "nvme_iov_md": false 00:15:49.902 }, 00:15:49.902 "memory_domains": [ 00:15:49.902 { 00:15:49.902 "dma_device_id": "system", 00:15:49.902 "dma_device_type": 1 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.902 "dma_device_type": 2 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "dma_device_id": "system", 00:15:49.902 "dma_device_type": 1 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.902 "dma_device_type": 2 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "dma_device_id": "system", 00:15:49.902 "dma_device_type": 1 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.902 "dma_device_type": 2 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "dma_device_id": "system", 00:15:49.902 "dma_device_type": 1 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:49.902 "dma_device_type": 2 00:15:49.902 } 00:15:49.902 ], 00:15:49.902 "driver_specific": { 00:15:49.902 "raid": { 00:15:49.902 "uuid": "16479918-dd78-471c-a8b0-ba1d6252aa82", 00:15:49.902 "strip_size_kb": 64, 00:15:49.902 "state": "online", 00:15:49.902 "raid_level": "concat", 00:15:49.902 "superblock": true, 00:15:49.902 "num_base_bdevs": 4, 00:15:49.902 "num_base_bdevs_discovered": 4, 00:15:49.902 "num_base_bdevs_operational": 4, 00:15:49.902 "base_bdevs_list": [ 00:15:49.902 { 00:15:49.902 "name": "pt1", 00:15:49.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.902 "is_configured": true, 00:15:49.902 "data_offset": 2048, 00:15:49.902 "data_size": 63488 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "name": "pt2", 00:15:49.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.902 "is_configured": true, 00:15:49.902 "data_offset": 2048, 00:15:49.902 "data_size": 63488 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "name": "pt3", 00:15:49.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.902 "is_configured": true, 00:15:49.902 "data_offset": 2048, 00:15:49.902 "data_size": 63488 00:15:49.902 }, 00:15:49.902 { 00:15:49.902 "name": "pt4", 00:15:49.902 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.902 "is_configured": true, 00:15:49.902 "data_offset": 2048, 00:15:49.902 "data_size": 63488 00:15:49.902 } 00:15:49.903 ] 00:15:49.903 } 00:15:49.903 } 00:15:49.903 }' 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:49.903 pt2 00:15:49.903 pt3 00:15:49.903 pt4' 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.903 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.160 10:44:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 [2024-10-30 10:44:11.489387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=16479918-dd78-471c-a8b0-ba1d6252aa82 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 16479918-dd78-471c-a8b0-ba1d6252aa82 ']' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 [2024-10-30 10:44:11.541017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.160 [2024-10-30 10:44:11.541048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.160 [2024-10-30 10:44:11.541137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.160 [2024-10-30 10:44:11.541222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.160 [2024-10-30 10:44:11.541243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.160 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.418 10:44:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.418 [2024-10-30 10:44:11.689082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:50.418 [2024-10-30 10:44:11.691531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:50.418 [2024-10-30 10:44:11.691603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:50.418 [2024-10-30 10:44:11.691654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:50.418 [2024-10-30 10:44:11.691722] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:50.418 [2024-10-30 10:44:11.691791] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:50.418 [2024-10-30 10:44:11.691823] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:50.418 [2024-10-30 10:44:11.691854] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:50.418 [2024-10-30 10:44:11.691875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.418 [2024-10-30 10:44:11.691890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:15:50.418 request: 00:15:50.418 { 00:15:50.418 "name": "raid_bdev1", 00:15:50.418 "raid_level": "concat", 00:15:50.418 "base_bdevs": [ 00:15:50.418 "malloc1", 00:15:50.418 "malloc2", 00:15:50.418 "malloc3", 00:15:50.418 "malloc4" 00:15:50.418 ], 00:15:50.418 "strip_size_kb": 64, 00:15:50.418 "superblock": false, 00:15:50.418 "method": "bdev_raid_create", 00:15:50.418 "req_id": 1 00:15:50.418 } 00:15:50.418 Got JSON-RPC error response 00:15:50.418 response: 00:15:50.418 { 00:15:50.418 "code": -17, 00:15:50.418 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:50.418 } 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.418 [2024-10-30 10:44:11.753075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.418 [2024-10-30 10:44:11.753147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.418 [2024-10-30 10:44:11.753172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:50.418 [2024-10-30 10:44:11.753189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.418 [2024-10-30 10:44:11.756017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.418 [2024-10-30 10:44:11.756085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.418 [2024-10-30 10:44:11.756181] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:50.418 [2024-10-30 10:44:11.756257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.418 pt1 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.418 "name": "raid_bdev1", 00:15:50.418 "uuid": "16479918-dd78-471c-a8b0-ba1d6252aa82", 00:15:50.418 "strip_size_kb": 64, 00:15:50.418 "state": "configuring", 00:15:50.418 "raid_level": "concat", 00:15:50.418 "superblock": true, 00:15:50.418 "num_base_bdevs": 4, 00:15:50.418 "num_base_bdevs_discovered": 1, 00:15:50.418 "num_base_bdevs_operational": 4, 00:15:50.418 "base_bdevs_list": [ 00:15:50.418 { 00:15:50.418 "name": "pt1", 00:15:50.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.418 "is_configured": true, 00:15:50.418 "data_offset": 2048, 00:15:50.418 "data_size": 63488 00:15:50.418 }, 00:15:50.418 { 00:15:50.418 "name": null, 00:15:50.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.418 "is_configured": false, 00:15:50.418 "data_offset": 2048, 00:15:50.418 "data_size": 63488 00:15:50.418 }, 00:15:50.418 { 00:15:50.418 "name": null, 00:15:50.418 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.418 "is_configured": false, 00:15:50.418 "data_offset": 2048, 00:15:50.418 "data_size": 63488 00:15:50.418 }, 00:15:50.418 { 00:15:50.418 "name": null, 00:15:50.418 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.418 "is_configured": false, 00:15:50.418 "data_offset": 2048, 00:15:50.418 "data_size": 63488 00:15:50.418 } 00:15:50.418 ] 00:15:50.418 }' 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.418 10:44:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.983 [2024-10-30 10:44:12.285290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.983 [2024-10-30 10:44:12.285383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.983 [2024-10-30 10:44:12.285412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:50.983 [2024-10-30 10:44:12.285441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.983 [2024-10-30 10:44:12.286104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.983 [2024-10-30 10:44:12.286157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.983 [2024-10-30 10:44:12.286284] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.983 [2024-10-30 10:44:12.286328] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.983 pt2 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.983 [2024-10-30 10:44:12.293258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.983 10:44:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.983 "name": "raid_bdev1", 00:15:50.983 "uuid": "16479918-dd78-471c-a8b0-ba1d6252aa82", 00:15:50.983 "strip_size_kb": 64, 00:15:50.983 "state": "configuring", 00:15:50.983 "raid_level": "concat", 00:15:50.983 "superblock": true, 00:15:50.983 "num_base_bdevs": 4, 00:15:50.983 "num_base_bdevs_discovered": 1, 00:15:50.983 "num_base_bdevs_operational": 4, 00:15:50.983 "base_bdevs_list": [ 00:15:50.983 { 00:15:50.983 "name": "pt1", 00:15:50.983 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.983 "is_configured": true, 00:15:50.983 "data_offset": 2048, 00:15:50.983 "data_size": 63488 00:15:50.983 }, 00:15:50.983 { 00:15:50.983 "name": null, 00:15:50.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.983 "is_configured": false, 00:15:50.983 "data_offset": 0, 00:15:50.983 "data_size": 63488 00:15:50.983 }, 00:15:50.983 { 00:15:50.983 "name": null, 00:15:50.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.983 "is_configured": false, 00:15:50.983 "data_offset": 2048, 00:15:50.983 "data_size": 63488 00:15:50.983 }, 00:15:50.983 { 00:15:50.983 "name": null, 00:15:50.983 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.983 "is_configured": false, 00:15:50.983 "data_offset": 2048, 00:15:50.983 "data_size": 63488 00:15:50.983 } 00:15:50.983 ] 00:15:50.983 }' 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.983 10:44:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.545 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:51.545 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.545 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.545 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.545 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.545 [2024-10-30 10:44:12.845463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.545 [2024-10-30 10:44:12.845543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.545 [2024-10-30 10:44:12.845573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:51.545 [2024-10-30 10:44:12.845588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.545 [2024-10-30 10:44:12.846161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.545 [2024-10-30 10:44:12.846199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.545 [2024-10-30 10:44:12.846321] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:51.545 [2024-10-30 10:44:12.846370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.545 pt2 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 [2024-10-30 10:44:12.853467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.546 [2024-10-30 10:44:12.853547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.546 [2024-10-30 10:44:12.853591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:51.546 [2024-10-30 10:44:12.853611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.546 [2024-10-30 10:44:12.854216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.546 [2024-10-30 10:44:12.854263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.546 [2024-10-30 10:44:12.854383] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:51.546 [2024-10-30 10:44:12.854431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.546 pt3 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 [2024-10-30 10:44:12.861438] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:51.546 [2024-10-30 10:44:12.861514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.546 [2024-10-30 10:44:12.861551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:51.546 [2024-10-30 10:44:12.861569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.546 [2024-10-30 10:44:12.862202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.546 [2024-10-30 10:44:12.862250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:51.546 [2024-10-30 10:44:12.862364] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:51.546 [2024-10-30 10:44:12.862401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:51.546 [2024-10-30 10:44:12.862614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.546 [2024-10-30 10:44:12.862643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:51.546 [2024-10-30 10:44:12.863011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:51.546 [2024-10-30 10:44:12.863265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.546 [2024-10-30 10:44:12.863313] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:51.546 [2024-10-30 10:44:12.863517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.546 pt4 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.546 "name": "raid_bdev1", 00:15:51.546 "uuid": "16479918-dd78-471c-a8b0-ba1d6252aa82", 00:15:51.546 "strip_size_kb": 64, 00:15:51.546 "state": "online", 00:15:51.546 "raid_level": "concat", 00:15:51.546 
"superblock": true, 00:15:51.546 "num_base_bdevs": 4, 00:15:51.546 "num_base_bdevs_discovered": 4, 00:15:51.546 "num_base_bdevs_operational": 4, 00:15:51.546 "base_bdevs_list": [ 00:15:51.546 { 00:15:51.546 "name": "pt1", 00:15:51.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:51.546 "is_configured": true, 00:15:51.546 "data_offset": 2048, 00:15:51.546 "data_size": 63488 00:15:51.546 }, 00:15:51.546 { 00:15:51.546 "name": "pt2", 00:15:51.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.546 "is_configured": true, 00:15:51.546 "data_offset": 2048, 00:15:51.546 "data_size": 63488 00:15:51.546 }, 00:15:51.546 { 00:15:51.546 "name": "pt3", 00:15:51.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.546 "is_configured": true, 00:15:51.546 "data_offset": 2048, 00:15:51.546 "data_size": 63488 00:15:51.546 }, 00:15:51.546 { 00:15:51.546 "name": "pt4", 00:15:51.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.546 "is_configured": true, 00:15:51.546 "data_offset": 2048, 00:15:51.546 "data_size": 63488 00:15:51.546 } 00:15:51.546 ] 00:15:51.546 }' 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.546 10:44:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:52.113 10:44:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.113 [2024-10-30 10:44:13.438071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:52.113 "name": "raid_bdev1", 00:15:52.113 "aliases": [ 00:15:52.113 "16479918-dd78-471c-a8b0-ba1d6252aa82" 00:15:52.113 ], 00:15:52.113 "product_name": "Raid Volume", 00:15:52.113 "block_size": 512, 00:15:52.113 "num_blocks": 253952, 00:15:52.113 "uuid": "16479918-dd78-471c-a8b0-ba1d6252aa82", 00:15:52.113 "assigned_rate_limits": { 00:15:52.113 "rw_ios_per_sec": 0, 00:15:52.113 "rw_mbytes_per_sec": 0, 00:15:52.113 "r_mbytes_per_sec": 0, 00:15:52.113 "w_mbytes_per_sec": 0 00:15:52.113 }, 00:15:52.113 "claimed": false, 00:15:52.113 "zoned": false, 00:15:52.113 "supported_io_types": { 00:15:52.113 "read": true, 00:15:52.113 "write": true, 00:15:52.113 "unmap": true, 00:15:52.113 "flush": true, 00:15:52.113 "reset": true, 00:15:52.113 "nvme_admin": false, 00:15:52.113 "nvme_io": false, 00:15:52.113 "nvme_io_md": false, 00:15:52.113 "write_zeroes": true, 00:15:52.113 "zcopy": false, 00:15:52.113 "get_zone_info": false, 00:15:52.113 "zone_management": false, 00:15:52.113 "zone_append": false, 00:15:52.113 "compare": false, 00:15:52.113 "compare_and_write": false, 00:15:52.113 "abort": false, 00:15:52.113 "seek_hole": false, 00:15:52.113 "seek_data": false, 00:15:52.113 "copy": false, 00:15:52.113 "nvme_iov_md": false 00:15:52.113 }, 00:15:52.113 
"memory_domains": [ 00:15:52.113 { 00:15:52.113 "dma_device_id": "system", 00:15:52.113 "dma_device_type": 1 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.113 "dma_device_type": 2 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "dma_device_id": "system", 00:15:52.113 "dma_device_type": 1 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.113 "dma_device_type": 2 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "dma_device_id": "system", 00:15:52.113 "dma_device_type": 1 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.113 "dma_device_type": 2 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "dma_device_id": "system", 00:15:52.113 "dma_device_type": 1 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.113 "dma_device_type": 2 00:15:52.113 } 00:15:52.113 ], 00:15:52.113 "driver_specific": { 00:15:52.113 "raid": { 00:15:52.113 "uuid": "16479918-dd78-471c-a8b0-ba1d6252aa82", 00:15:52.113 "strip_size_kb": 64, 00:15:52.113 "state": "online", 00:15:52.113 "raid_level": "concat", 00:15:52.113 "superblock": true, 00:15:52.113 "num_base_bdevs": 4, 00:15:52.113 "num_base_bdevs_discovered": 4, 00:15:52.113 "num_base_bdevs_operational": 4, 00:15:52.113 "base_bdevs_list": [ 00:15:52.113 { 00:15:52.113 "name": "pt1", 00:15:52.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:52.113 "is_configured": true, 00:15:52.113 "data_offset": 2048, 00:15:52.113 "data_size": 63488 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "name": "pt2", 00:15:52.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.113 "is_configured": true, 00:15:52.113 "data_offset": 2048, 00:15:52.113 "data_size": 63488 00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "name": "pt3", 00:15:52.113 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.113 "is_configured": true, 00:15:52.113 "data_offset": 2048, 00:15:52.113 "data_size": 63488 
00:15:52.113 }, 00:15:52.113 { 00:15:52.113 "name": "pt4", 00:15:52.113 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:52.113 "is_configured": true, 00:15:52.113 "data_offset": 2048, 00:15:52.113 "data_size": 63488 00:15:52.113 } 00:15:52.113 ] 00:15:52.113 } 00:15:52.113 } 00:15:52.113 }' 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:52.113 pt2 00:15:52.113 pt3 00:15:52.113 pt4' 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.113 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.372 [2024-10-30 10:44:13.794136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 16479918-dd78-471c-a8b0-ba1d6252aa82 '!=' 16479918-dd78-471c-a8b0-ba1d6252aa82 ']' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72928 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72928 ']' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72928 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@957 -- # uname 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:52.372 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72928 00:15:52.630 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:52.630 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:52.630 killing process with pid 72928 00:15:52.630 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72928' 00:15:52.630 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72928 00:15:52.630 [2024-10-30 10:44:13.863816] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.630 10:44:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72928 00:15:52.631 [2024-10-30 10:44:13.863914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.631 [2024-10-30 10:44:13.864019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.631 [2024-10-30 10:44:13.864036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:52.889 [2024-10-30 10:44:14.207087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.828 10:44:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:53.828 00:15:53.828 real 0m5.951s 00:15:53.828 user 0m8.999s 00:15:53.828 sys 0m0.872s 00:15:53.828 10:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:53.828 10:44:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.828 ************************************ 00:15:53.828 END TEST raid_superblock_test 
00:15:53.828 ************************************ 00:15:53.828 10:44:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:53.828 10:44:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:53.828 10:44:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:53.828 10:44:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.828 ************************************ 00:15:53.828 START TEST raid_read_error_test 00:15:53.828 ************************************ 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:53.828 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7XQMOy2uUd 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73198 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73198 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73198 ']' 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.829 10:44:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.097 [2024-10-30 10:44:15.396005] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:15:54.097 [2024-10-30 10:44:15.396177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73198 ] 00:15:54.356 [2024-10-30 10:44:15.581126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.356 [2024-10-30 10:44:15.710510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.614 [2024-10-30 10:44:15.915951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.614 [2024-10-30 10:44:15.916073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.180 BaseBdev1_malloc 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.180 true 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.180 [2024-10-30 10:44:16.464751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:55.180 [2024-10-30 10:44:16.464812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.180 [2024-10-30 10:44:16.464841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:55.180 [2024-10-30 10:44:16.464859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.180 [2024-10-30 10:44:16.467930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.180 [2024-10-30 10:44:16.468006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.180 BaseBdev1 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.180 BaseBdev2_malloc 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:55.180 10:44:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.181 true 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.181 [2024-10-30 10:44:16.522312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:55.181 [2024-10-30 10:44:16.522406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.181 [2024-10-30 10:44:16.522430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:55.181 [2024-10-30 10:44:16.522447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.181 [2024-10-30 10:44:16.525390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.181 [2024-10-30 10:44:16.525446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.181 BaseBdev2 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.181 BaseBdev3_malloc 00:15:55.181 10:44:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.181 true 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.181 [2024-10-30 10:44:16.592205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:55.181 [2024-10-30 10:44:16.592268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.181 [2024-10-30 10:44:16.592294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:55.181 [2024-10-30 10:44:16.592312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.181 [2024-10-30 10:44:16.595142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.181 [2024-10-30 10:44:16.595186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:55.181 BaseBdev3 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.181 BaseBdev4_malloc 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.181 true 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.181 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.439 [2024-10-30 10:44:16.650429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:55.440 [2024-10-30 10:44:16.650507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.440 [2024-10-30 10:44:16.650533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:55.440 [2024-10-30 10:44:16.650550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.440 [2024-10-30 10:44:16.653444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.440 [2024-10-30 10:44:16.653506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:55.440 BaseBdev4 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.440 [2024-10-30 10:44:16.658494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.440 [2024-10-30 10:44:16.660995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.440 [2024-10-30 10:44:16.661136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:55.440 [2024-10-30 10:44:16.661245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.440 [2024-10-30 10:44:16.661540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:55.440 [2024-10-30 10:44:16.661573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:55.440 [2024-10-30 10:44:16.661874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:55.440 [2024-10-30 10:44:16.662113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:55.440 [2024-10-30 10:44:16.662139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:55.440 [2024-10-30 10:44:16.662324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:55.440 10:44:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.440 "name": "raid_bdev1", 00:15:55.440 "uuid": "471368e5-5065-4898-8d84-49ba596451c6", 00:15:55.440 "strip_size_kb": 64, 00:15:55.440 "state": "online", 00:15:55.440 "raid_level": "concat", 00:15:55.440 "superblock": true, 00:15:55.440 "num_base_bdevs": 4, 00:15:55.440 "num_base_bdevs_discovered": 4, 00:15:55.440 "num_base_bdevs_operational": 4, 00:15:55.440 "base_bdevs_list": [ 
00:15:55.440 { 00:15:55.440 "name": "BaseBdev1", 00:15:55.440 "uuid": "5a804d41-7710-5125-85cc-fe56173274f3", 00:15:55.440 "is_configured": true, 00:15:55.440 "data_offset": 2048, 00:15:55.440 "data_size": 63488 00:15:55.440 }, 00:15:55.440 { 00:15:55.440 "name": "BaseBdev2", 00:15:55.440 "uuid": "783c857e-e722-5162-ae53-d30405a5d63e", 00:15:55.440 "is_configured": true, 00:15:55.440 "data_offset": 2048, 00:15:55.440 "data_size": 63488 00:15:55.440 }, 00:15:55.440 { 00:15:55.440 "name": "BaseBdev3", 00:15:55.440 "uuid": "45ce08fd-a93c-5dc7-96e1-8301610b1cbd", 00:15:55.440 "is_configured": true, 00:15:55.440 "data_offset": 2048, 00:15:55.440 "data_size": 63488 00:15:55.440 }, 00:15:55.440 { 00:15:55.440 "name": "BaseBdev4", 00:15:55.440 "uuid": "4de13375-eb38-5f4f-a99b-2f368b810574", 00:15:55.440 "is_configured": true, 00:15:55.440 "data_offset": 2048, 00:15:55.440 "data_size": 63488 00:15:55.440 } 00:15:55.440 ] 00:15:55.440 }' 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.440 10:44:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.006 10:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:56.006 10:44:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:56.006 [2024-10-30 10:44:17.332158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.941 10:44:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.941 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.941 10:44:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.941 "name": "raid_bdev1", 00:15:56.941 "uuid": "471368e5-5065-4898-8d84-49ba596451c6", 00:15:56.941 "strip_size_kb": 64, 00:15:56.942 "state": "online", 00:15:56.942 "raid_level": "concat", 00:15:56.942 "superblock": true, 00:15:56.942 "num_base_bdevs": 4, 00:15:56.942 "num_base_bdevs_discovered": 4, 00:15:56.942 "num_base_bdevs_operational": 4, 00:15:56.942 "base_bdevs_list": [ 00:15:56.942 { 00:15:56.942 "name": "BaseBdev1", 00:15:56.942 "uuid": "5a804d41-7710-5125-85cc-fe56173274f3", 00:15:56.942 "is_configured": true, 00:15:56.942 "data_offset": 2048, 00:15:56.942 "data_size": 63488 00:15:56.942 }, 00:15:56.942 { 00:15:56.942 "name": "BaseBdev2", 00:15:56.942 "uuid": "783c857e-e722-5162-ae53-d30405a5d63e", 00:15:56.942 "is_configured": true, 00:15:56.942 "data_offset": 2048, 00:15:56.942 "data_size": 63488 00:15:56.942 }, 00:15:56.942 { 00:15:56.942 "name": "BaseBdev3", 00:15:56.942 "uuid": "45ce08fd-a93c-5dc7-96e1-8301610b1cbd", 00:15:56.942 "is_configured": true, 00:15:56.942 "data_offset": 2048, 00:15:56.942 "data_size": 63488 00:15:56.942 }, 00:15:56.942 { 00:15:56.942 "name": "BaseBdev4", 00:15:56.942 "uuid": "4de13375-eb38-5f4f-a99b-2f368b810574", 00:15:56.942 "is_configured": true, 00:15:56.942 "data_offset": 2048, 00:15:56.942 "data_size": 63488 00:15:56.942 } 00:15:56.942 ] 00:15:56.942 }' 00:15:56.942 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.942 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.508 [2024-10-30 10:44:18.738428] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.508 [2024-10-30 10:44:18.738486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.508 [2024-10-30 10:44:18.741972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.508 [2024-10-30 10:44:18.742084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.508 [2024-10-30 10:44:18.742144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.508 [2024-10-30 10:44:18.742165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:57.508 { 00:15:57.508 "results": [ 00:15:57.508 { 00:15:57.508 "job": "raid_bdev1", 00:15:57.508 "core_mask": "0x1", 00:15:57.508 "workload": "randrw", 00:15:57.508 "percentage": 50, 00:15:57.508 "status": "finished", 00:15:57.508 "queue_depth": 1, 00:15:57.508 "io_size": 131072, 00:15:57.508 "runtime": 1.403856, 00:15:57.508 "iops": 10704.089308305125, 00:15:57.508 "mibps": 1338.0111635381406, 00:15:57.508 "io_failed": 1, 00:15:57.508 "io_timeout": 0, 00:15:57.508 "avg_latency_us": 130.14828272073947, 00:15:57.508 "min_latency_us": 38.4, 00:15:57.508 "max_latency_us": 1861.8181818181818 00:15:57.508 } 00:15:57.508 ], 00:15:57.508 "core_count": 1 00:15:57.508 } 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73198 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73198 ']' 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73198 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73198 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:57.508 killing process with pid 73198 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73198' 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73198 00:15:57.508 [2024-10-30 10:44:18.775881] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.508 10:44:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73198 00:15:57.767 [2024-10-30 10:44:19.076859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7XQMOy2uUd 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:58.713 00:15:58.713 real 0m4.895s 00:15:58.713 user 0m6.081s 00:15:58.713 sys 0m0.594s 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:15:58.713 10:44:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.713 ************************************ 00:15:58.713 END TEST raid_read_error_test 00:15:58.713 ************************************ 00:15:58.973 10:44:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:58.973 10:44:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:58.973 10:44:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:58.973 10:44:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.973 ************************************ 00:15:58.973 START TEST raid_write_error_test 00:15:58.973 ************************************ 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fskFvi9Jc1 00:15:58.973 10:44:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73344 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73344 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73344 ']' 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:58.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:58.973 10:44:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.973 [2024-10-30 10:44:20.344714] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:15:58.973 [2024-10-30 10:44:20.344942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73344 ] 00:15:59.232 [2024-10-30 10:44:20.534259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.491 [2024-10-30 10:44:20.717168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.491 [2024-10-30 10:44:20.909869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.491 [2024-10-30 10:44:20.909962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.059 BaseBdev1_malloc 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.059 true 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.059 [2024-10-30 10:44:21.368873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:00.059 [2024-10-30 10:44:21.368948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.059 [2024-10-30 10:44:21.368990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:00.059 [2024-10-30 10:44:21.369011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.059 [2024-10-30 10:44:21.371783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.059 [2024-10-30 10:44:21.371839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:00.059 BaseBdev1 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.059 BaseBdev2_malloc 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:00.059 10:44:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.059 true 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.059 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.060 [2024-10-30 10:44:21.424646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:00.060 [2024-10-30 10:44:21.424716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.060 [2024-10-30 10:44:21.424741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:00.060 [2024-10-30 10:44:21.424758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.060 [2024-10-30 10:44:21.427517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.060 [2024-10-30 10:44:21.427583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:00.060 BaseBdev2 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:00.060 BaseBdev3_malloc 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.060 true 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.060 [2024-10-30 10:44:21.490705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:00.060 [2024-10-30 10:44:21.490774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.060 [2024-10-30 10:44:21.490801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:00.060 [2024-10-30 10:44:21.490819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.060 [2024-10-30 10:44:21.493635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.060 [2024-10-30 10:44:21.493699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:00.060 BaseBdev3 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.060 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.319 BaseBdev4_malloc 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.319 true 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.319 [2024-10-30 10:44:21.545935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:00.319 [2024-10-30 10:44:21.546013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.319 [2024-10-30 10:44:21.546041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:00.319 [2024-10-30 10:44:21.546059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.319 [2024-10-30 10:44:21.548856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.319 [2024-10-30 10:44:21.548939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:00.319 BaseBdev4 
00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:00.319 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 [2024-10-30 10:44:21.554058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.320 [2024-10-30 10:44:21.556503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.320 [2024-10-30 10:44:21.556611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:00.320 [2024-10-30 10:44:21.556715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:00.320 [2024-10-30 10:44:21.557019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:00.320 [2024-10-30 10:44:21.557051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:00.320 [2024-10-30 10:44:21.557361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:00.320 [2024-10-30 10:44:21.557579] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:00.320 [2024-10-30 10:44:21.557606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:00.320 [2024-10-30 10:44:21.557793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.320 "name": "raid_bdev1", 00:16:00.320 "uuid": "7dbb6d72-3fb3-4568-bed8-a47a8bbd61b9", 00:16:00.320 "strip_size_kb": 64, 00:16:00.320 "state": "online", 00:16:00.320 "raid_level": "concat", 00:16:00.320 "superblock": true, 00:16:00.320 "num_base_bdevs": 4, 00:16:00.320 "num_base_bdevs_discovered": 4, 00:16:00.320 
"num_base_bdevs_operational": 4, 00:16:00.320 "base_bdevs_list": [ 00:16:00.320 { 00:16:00.320 "name": "BaseBdev1", 00:16:00.320 "uuid": "db927a4b-3e4f-5085-a675-d49706478062", 00:16:00.320 "is_configured": true, 00:16:00.320 "data_offset": 2048, 00:16:00.320 "data_size": 63488 00:16:00.320 }, 00:16:00.320 { 00:16:00.320 "name": "BaseBdev2", 00:16:00.320 "uuid": "39bba2fc-edd3-562e-8cab-153bd6ba47b5", 00:16:00.320 "is_configured": true, 00:16:00.320 "data_offset": 2048, 00:16:00.320 "data_size": 63488 00:16:00.320 }, 00:16:00.320 { 00:16:00.320 "name": "BaseBdev3", 00:16:00.320 "uuid": "52396e1a-ec68-537a-a204-495996068b75", 00:16:00.320 "is_configured": true, 00:16:00.320 "data_offset": 2048, 00:16:00.320 "data_size": 63488 00:16:00.320 }, 00:16:00.320 { 00:16:00.320 "name": "BaseBdev4", 00:16:00.320 "uuid": "e501d930-499c-5a47-8bf2-9e4a58a27d3d", 00:16:00.320 "is_configured": true, 00:16:00.320 "data_offset": 2048, 00:16:00.320 "data_size": 63488 00:16:00.320 } 00:16:00.320 ] 00:16:00.320 }' 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.320 10:44:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.888 10:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:00.888 10:44:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:00.888 [2024-10-30 10:44:22.207619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.824 10:44:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.824 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.824 "name": "raid_bdev1", 00:16:01.824 "uuid": "7dbb6d72-3fb3-4568-bed8-a47a8bbd61b9", 00:16:01.824 "strip_size_kb": 64, 00:16:01.824 "state": "online", 00:16:01.824 "raid_level": "concat", 00:16:01.824 "superblock": true, 00:16:01.824 "num_base_bdevs": 4, 00:16:01.824 "num_base_bdevs_discovered": 4, 00:16:01.824 "num_base_bdevs_operational": 4, 00:16:01.824 "base_bdevs_list": [ 00:16:01.824 { 00:16:01.824 "name": "BaseBdev1", 00:16:01.824 "uuid": "db927a4b-3e4f-5085-a675-d49706478062", 00:16:01.824 "is_configured": true, 00:16:01.824 "data_offset": 2048, 00:16:01.824 "data_size": 63488 00:16:01.824 }, 00:16:01.824 { 00:16:01.824 "name": "BaseBdev2", 00:16:01.824 "uuid": "39bba2fc-edd3-562e-8cab-153bd6ba47b5", 00:16:01.824 "is_configured": true, 00:16:01.824 "data_offset": 2048, 00:16:01.824 "data_size": 63488 00:16:01.824 }, 00:16:01.824 { 00:16:01.824 "name": "BaseBdev3", 00:16:01.824 "uuid": "52396e1a-ec68-537a-a204-495996068b75", 00:16:01.824 "is_configured": true, 00:16:01.824 "data_offset": 2048, 00:16:01.824 "data_size": 63488 00:16:01.824 }, 00:16:01.824 { 00:16:01.824 "name": "BaseBdev4", 00:16:01.824 "uuid": "e501d930-499c-5a47-8bf2-9e4a58a27d3d", 00:16:01.824 "is_configured": true, 00:16:01.824 "data_offset": 2048, 00:16:01.824 "data_size": 63488 00:16:01.824 } 00:16:01.824 ] 00:16:01.824 }' 00:16:01.825 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.825 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.393 [2024-10-30 10:44:23.593322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.393 [2024-10-30 10:44:23.593363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.393 [2024-10-30 10:44:23.596662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.393 [2024-10-30 10:44:23.596745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.393 [2024-10-30 10:44:23.596808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.393 [2024-10-30 10:44:23.596839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.393 { 00:16:02.393 "results": [ 00:16:02.393 { 00:16:02.393 "job": "raid_bdev1", 00:16:02.393 "core_mask": "0x1", 00:16:02.393 "workload": "randrw", 00:16:02.393 "percentage": 50, 00:16:02.393 "status": "finished", 00:16:02.393 "queue_depth": 1, 00:16:02.393 "io_size": 131072, 00:16:02.393 "runtime": 1.38332, 00:16:02.393 "iops": 10834.803227019056, 00:16:02.393 "mibps": 1354.350403377382, 00:16:02.393 "io_failed": 1, 00:16:02.393 "io_timeout": 0, 00:16:02.393 "avg_latency_us": 128.3074254453266, 00:16:02.393 "min_latency_us": 38.63272727272727, 00:16:02.393 "max_latency_us": 1891.6072727272726 00:16:02.393 } 00:16:02.393 ], 00:16:02.393 "core_count": 1 00:16:02.393 } 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73344 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73344 ']' 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73344 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73344 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73344' 00:16:02.393 killing process with pid 73344 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73344 00:16:02.393 [2024-10-30 10:44:23.629414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.393 10:44:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73344 00:16:02.652 [2024-10-30 10:44:23.910592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fskFvi9Jc1 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:16:03.590 00:16:03.590 real 0m4.765s 00:16:03.590 user 0m5.872s 
00:16:03.590 sys 0m0.601s 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:03.590 10:44:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.590 ************************************ 00:16:03.590 END TEST raid_write_error_test 00:16:03.590 ************************************ 00:16:03.590 10:44:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:03.590 10:44:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:16:03.590 10:44:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:03.590 10:44:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:03.590 10:44:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.590 ************************************ 00:16:03.590 START TEST raid_state_function_test 00:16:03.590 ************************************ 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.590 
10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:03.590 10:44:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73488 00:16:03.590 Process raid pid: 73488 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73488' 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73488 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73488 ']' 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:03.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:03.590 10:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.849 [2024-10-30 10:44:25.162493] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:16:03.849 [2024-10-30 10:44:25.162686] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.109 [2024-10-30 10:44:25.350038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.109 [2024-10-30 10:44:25.480085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.383 [2024-10-30 10:44:25.690054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.383 [2024-10-30 10:44:25.690092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.972 [2024-10-30 10:44:26.150947] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.972 [2024-10-30 10:44:26.151033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.972 [2024-10-30 10:44:26.151064] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.972 [2024-10-30 10:44:26.151082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.972 [2024-10-30 10:44:26.151093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:04.972 [2024-10-30 10:44:26.151108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.972 [2024-10-30 10:44:26.151118] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.972 [2024-10-30 10:44:26.151132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.972 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.973 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.973 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.973 10:44:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.973 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.973 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.973 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.973 "name": "Existed_Raid", 00:16:04.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.973 "strip_size_kb": 0, 00:16:04.973 "state": "configuring", 00:16:04.973 "raid_level": "raid1", 00:16:04.973 "superblock": false, 00:16:04.973 "num_base_bdevs": 4, 00:16:04.973 "num_base_bdevs_discovered": 0, 00:16:04.973 "num_base_bdevs_operational": 4, 00:16:04.973 "base_bdevs_list": [ 00:16:04.973 { 00:16:04.973 "name": "BaseBdev1", 00:16:04.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.973 "is_configured": false, 00:16:04.973 "data_offset": 0, 00:16:04.973 "data_size": 0 00:16:04.973 }, 00:16:04.973 { 00:16:04.973 "name": "BaseBdev2", 00:16:04.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.973 "is_configured": false, 00:16:04.973 "data_offset": 0, 00:16:04.973 "data_size": 0 00:16:04.973 }, 00:16:04.973 { 00:16:04.973 "name": "BaseBdev3", 00:16:04.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.973 "is_configured": false, 00:16:04.973 "data_offset": 0, 00:16:04.973 "data_size": 0 00:16:04.973 }, 00:16:04.973 { 00:16:04.973 "name": "BaseBdev4", 00:16:04.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.973 "is_configured": false, 00:16:04.973 "data_offset": 0, 00:16:04.973 "data_size": 0 00:16:04.973 } 00:16:04.973 ] 00:16:04.973 }' 00:16:04.973 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.973 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 [2024-10-30 10:44:26.667077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.231 [2024-10-30 10:44:26.667145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 [2024-10-30 10:44:26.675021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.231 [2024-10-30 10:44:26.675508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.231 [2024-10-30 10:44:26.675543] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.231 [2024-10-30 10:44:26.675641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.231 [2024-10-30 10:44:26.675659] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.231 [2024-10-30 10:44:26.675763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.231 [2024-10-30 10:44:26.675784] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:05.231 [2024-10-30 10:44:26.675883] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.231 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.497 [2024-10-30 10:44:26.720119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.497 BaseBdev1 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.497 [ 00:16:05.497 { 00:16:05.497 "name": "BaseBdev1", 00:16:05.497 "aliases": [ 00:16:05.497 "3451a6ea-214b-4420-8e6d-97eb591de5ed" 00:16:05.497 ], 00:16:05.497 "product_name": "Malloc disk", 00:16:05.497 "block_size": 512, 00:16:05.497 "num_blocks": 65536, 00:16:05.497 "uuid": "3451a6ea-214b-4420-8e6d-97eb591de5ed", 00:16:05.497 "assigned_rate_limits": { 00:16:05.497 "rw_ios_per_sec": 0, 00:16:05.497 "rw_mbytes_per_sec": 0, 00:16:05.497 "r_mbytes_per_sec": 0, 00:16:05.497 "w_mbytes_per_sec": 0 00:16:05.497 }, 00:16:05.497 "claimed": true, 00:16:05.497 "claim_type": "exclusive_write", 00:16:05.497 "zoned": false, 00:16:05.497 "supported_io_types": { 00:16:05.497 "read": true, 00:16:05.497 "write": true, 00:16:05.497 "unmap": true, 00:16:05.497 "flush": true, 00:16:05.497 "reset": true, 00:16:05.497 "nvme_admin": false, 00:16:05.497 "nvme_io": false, 00:16:05.497 "nvme_io_md": false, 00:16:05.497 "write_zeroes": true, 00:16:05.497 "zcopy": true, 00:16:05.497 "get_zone_info": false, 00:16:05.497 "zone_management": false, 00:16:05.497 "zone_append": false, 00:16:05.497 "compare": false, 00:16:05.497 "compare_and_write": false, 00:16:05.497 "abort": true, 00:16:05.497 "seek_hole": false, 00:16:05.497 "seek_data": false, 00:16:05.497 "copy": true, 00:16:05.497 "nvme_iov_md": false 00:16:05.497 }, 00:16:05.497 "memory_domains": [ 00:16:05.497 { 00:16:05.497 "dma_device_id": "system", 00:16:05.497 "dma_device_type": 1 00:16:05.497 }, 00:16:05.497 { 00:16:05.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.497 "dma_device_type": 2 00:16:05.497 } 00:16:05.497 ], 00:16:05.497 "driver_specific": {} 00:16:05.497 } 00:16:05.497 ] 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.497 "name": "Existed_Raid", 
00:16:05.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.497 "strip_size_kb": 0, 00:16:05.497 "state": "configuring", 00:16:05.497 "raid_level": "raid1", 00:16:05.497 "superblock": false, 00:16:05.497 "num_base_bdevs": 4, 00:16:05.497 "num_base_bdevs_discovered": 1, 00:16:05.497 "num_base_bdevs_operational": 4, 00:16:05.497 "base_bdevs_list": [ 00:16:05.497 { 00:16:05.497 "name": "BaseBdev1", 00:16:05.497 "uuid": "3451a6ea-214b-4420-8e6d-97eb591de5ed", 00:16:05.497 "is_configured": true, 00:16:05.497 "data_offset": 0, 00:16:05.497 "data_size": 65536 00:16:05.497 }, 00:16:05.497 { 00:16:05.497 "name": "BaseBdev2", 00:16:05.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.497 "is_configured": false, 00:16:05.497 "data_offset": 0, 00:16:05.497 "data_size": 0 00:16:05.497 }, 00:16:05.497 { 00:16:05.497 "name": "BaseBdev3", 00:16:05.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.497 "is_configured": false, 00:16:05.497 "data_offset": 0, 00:16:05.497 "data_size": 0 00:16:05.497 }, 00:16:05.497 { 00:16:05.497 "name": "BaseBdev4", 00:16:05.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.497 "is_configured": false, 00:16:05.497 "data_offset": 0, 00:16:05.497 "data_size": 0 00:16:05.497 } 00:16:05.497 ] 00:16:05.497 }' 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.497 10:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.066 [2024-10-30 10:44:27.248302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.066 [2024-10-30 10:44:27.248427] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.066 [2024-10-30 10:44:27.256373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.066 [2024-10-30 10:44:27.258822] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.066 [2024-10-30 10:44:27.259130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.066 [2024-10-30 10:44:27.259157] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:06.066 [2024-10-30 10:44:27.259288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:06.066 [2024-10-30 10:44:27.259312] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:06.066 [2024-10-30 10:44:27.259420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:06.066 
10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.066 "name": "Existed_Raid", 00:16:06.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.066 "strip_size_kb": 0, 00:16:06.066 "state": "configuring", 00:16:06.066 "raid_level": "raid1", 00:16:06.066 "superblock": false, 00:16:06.066 "num_base_bdevs": 4, 00:16:06.066 "num_base_bdevs_discovered": 1, 
00:16:06.066 "num_base_bdevs_operational": 4, 00:16:06.066 "base_bdevs_list": [ 00:16:06.066 { 00:16:06.066 "name": "BaseBdev1", 00:16:06.066 "uuid": "3451a6ea-214b-4420-8e6d-97eb591de5ed", 00:16:06.066 "is_configured": true, 00:16:06.066 "data_offset": 0, 00:16:06.066 "data_size": 65536 00:16:06.066 }, 00:16:06.066 { 00:16:06.066 "name": "BaseBdev2", 00:16:06.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.066 "is_configured": false, 00:16:06.066 "data_offset": 0, 00:16:06.066 "data_size": 0 00:16:06.066 }, 00:16:06.066 { 00:16:06.066 "name": "BaseBdev3", 00:16:06.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.066 "is_configured": false, 00:16:06.066 "data_offset": 0, 00:16:06.066 "data_size": 0 00:16:06.066 }, 00:16:06.066 { 00:16:06.066 "name": "BaseBdev4", 00:16:06.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.066 "is_configured": false, 00:16:06.066 "data_offset": 0, 00:16:06.066 "data_size": 0 00:16:06.066 } 00:16:06.066 ] 00:16:06.066 }' 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.066 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.325 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:06.325 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.325 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.585 [2024-10-30 10:44:27.807015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.585 BaseBdev2 00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:06.585 [
00:16:06.585 {
00:16:06.585 "name": "BaseBdev2",
00:16:06.585 "aliases": [
00:16:06.585 "6f5c1dc7-bff2-4130-9c92-0811bf60a3ed"
00:16:06.585 ],
00:16:06.585 "product_name": "Malloc disk",
00:16:06.585 "block_size": 512,
00:16:06.585 "num_blocks": 65536,
00:16:06.585 "uuid": "6f5c1dc7-bff2-4130-9c92-0811bf60a3ed",
00:16:06.585 "assigned_rate_limits": {
00:16:06.585 "rw_ios_per_sec": 0,
00:16:06.585 "rw_mbytes_per_sec": 0,
00:16:06.585 "r_mbytes_per_sec": 0,
00:16:06.585 "w_mbytes_per_sec": 0
00:16:06.585 },
00:16:06.585 "claimed": true,
00:16:06.585 "claim_type": "exclusive_write",
00:16:06.585 "zoned": false,
00:16:06.585 "supported_io_types": {
00:16:06.585 "read": true,
00:16:06.585 "write": true,
00:16:06.585 "unmap": true,
00:16:06.585 "flush": true,
00:16:06.585 "reset": true,
00:16:06.585 "nvme_admin": false,
00:16:06.585 "nvme_io": false,
00:16:06.585 "nvme_io_md": false,
00:16:06.585 "write_zeroes": true,
00:16:06.585 "zcopy": true,
00:16:06.585 "get_zone_info": false,
00:16:06.585 "zone_management": false,
00:16:06.585 "zone_append": false,
00:16:06.585 "compare": false,
00:16:06.585 "compare_and_write": false,
00:16:06.585 "abort": true,
00:16:06.585 "seek_hole": false,
00:16:06.585 "seek_data": false,
00:16:06.585 "copy": true,
00:16:06.585 "nvme_iov_md": false
00:16:06.585 },
00:16:06.585 "memory_domains": [
00:16:06.585 {
00:16:06.585 "dma_device_id": "system",
00:16:06.585 "dma_device_type": 1
00:16:06.585 },
00:16:06.585 {
00:16:06.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:06.585 "dma_device_type": 2
00:16:06.585 }
00:16:06.585 ],
00:16:06.585 "driver_specific": {}
00:16:06.585 }
00:16:06.585 ]
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:06.585 "name": "Existed_Raid",
00:16:06.585 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.585 "strip_size_kb": 0,
00:16:06.585 "state": "configuring",
00:16:06.585 "raid_level": "raid1",
00:16:06.585 "superblock": false,
00:16:06.585 "num_base_bdevs": 4,
00:16:06.585 "num_base_bdevs_discovered": 2,
00:16:06.585 "num_base_bdevs_operational": 4,
00:16:06.585 "base_bdevs_list": [
00:16:06.585 {
00:16:06.585 "name": "BaseBdev1",
00:16:06.585 "uuid": "3451a6ea-214b-4420-8e6d-97eb591de5ed",
00:16:06.585 "is_configured": true,
00:16:06.585 "data_offset": 0,
00:16:06.585 "data_size": 65536
00:16:06.585 },
00:16:06.585 {
00:16:06.585 "name": "BaseBdev2",
00:16:06.585 "uuid": "6f5c1dc7-bff2-4130-9c92-0811bf60a3ed",
00:16:06.585 "is_configured": true,
00:16:06.585 "data_offset": 0,
00:16:06.585 "data_size": 65536
00:16:06.585 },
00:16:06.585 {
00:16:06.585 "name": "BaseBdev3",
00:16:06.585 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.585 "is_configured": false,
00:16:06.585 "data_offset": 0,
00:16:06.585 "data_size": 0
00:16:06.585 },
00:16:06.585 {
00:16:06.585 "name": "BaseBdev4",
00:16:06.585 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:06.585 "is_configured": false,
00:16:06.585 "data_offset": 0,
00:16:06.585 "data_size": 0
00:16:06.585 }
00:16:06.585 ]
00:16:06.585 }'
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:06.585 10:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.153 [2024-10-30 10:44:28.407333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:07.153 BaseBdev3
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.153 [
00:16:07.153 {
00:16:07.153 "name": "BaseBdev3",
00:16:07.153 "aliases": [
00:16:07.153 "dc06403e-abe0-44c5-8c41-4b6a3a49b7b5"
00:16:07.153 ],
00:16:07.153 "product_name": "Malloc disk",
00:16:07.153 "block_size": 512,
00:16:07.153 "num_blocks": 65536,
00:16:07.153 "uuid": "dc06403e-abe0-44c5-8c41-4b6a3a49b7b5",
00:16:07.153 "assigned_rate_limits": {
00:16:07.153 "rw_ios_per_sec": 0,
00:16:07.153 "rw_mbytes_per_sec": 0,
00:16:07.153 "r_mbytes_per_sec": 0,
00:16:07.153 "w_mbytes_per_sec": 0
00:16:07.153 },
00:16:07.153 "claimed": true,
00:16:07.153 "claim_type": "exclusive_write",
00:16:07.153 "zoned": false,
00:16:07.153 "supported_io_types": {
00:16:07.153 "read": true,
00:16:07.153 "write": true,
00:16:07.153 "unmap": true,
00:16:07.153 "flush": true,
00:16:07.153 "reset": true,
00:16:07.153 "nvme_admin": false,
00:16:07.153 "nvme_io": false,
00:16:07.153 "nvme_io_md": false,
00:16:07.153 "write_zeroes": true,
00:16:07.153 "zcopy": true,
00:16:07.153 "get_zone_info": false,
00:16:07.153 "zone_management": false,
00:16:07.153 "zone_append": false,
00:16:07.153 "compare": false,
00:16:07.153 "compare_and_write": false,
00:16:07.153 "abort": true,
00:16:07.153 "seek_hole": false,
00:16:07.153 "seek_data": false,
00:16:07.153 "copy": true,
00:16:07.153 "nvme_iov_md": false
00:16:07.153 },
00:16:07.153 "memory_domains": [
00:16:07.153 {
00:16:07.153 "dma_device_id": "system",
00:16:07.153 "dma_device_type": 1
00:16:07.153 },
00:16:07.153 {
00:16:07.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:07.153 "dma_device_type": 2
00:16:07.153 }
00:16:07.153 ],
00:16:07.153 "driver_specific": {}
00:16:07.153 }
00:16:07.153 ]
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.153 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:07.153 "name": "Existed_Raid",
00:16:07.153 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.153 "strip_size_kb": 0,
00:16:07.153 "state": "configuring",
00:16:07.154 "raid_level": "raid1",
00:16:07.154 "superblock": false,
00:16:07.154 "num_base_bdevs": 4,
00:16:07.154 "num_base_bdevs_discovered": 3,
00:16:07.154 "num_base_bdevs_operational": 4,
00:16:07.154 "base_bdevs_list": [
00:16:07.154 {
00:16:07.154 "name": "BaseBdev1",
00:16:07.154 "uuid": "3451a6ea-214b-4420-8e6d-97eb591de5ed",
00:16:07.154 "is_configured": true,
00:16:07.154 "data_offset": 0,
00:16:07.154 "data_size": 65536
00:16:07.154 },
00:16:07.154 {
00:16:07.154 "name": "BaseBdev2",
00:16:07.154 "uuid": "6f5c1dc7-bff2-4130-9c92-0811bf60a3ed",
00:16:07.154 "is_configured": true,
00:16:07.154 "data_offset": 0,
00:16:07.154 "data_size": 65536
00:16:07.154 },
00:16:07.154 {
00:16:07.154 "name": "BaseBdev3",
00:16:07.154 "uuid": "dc06403e-abe0-44c5-8c41-4b6a3a49b7b5",
00:16:07.154 "is_configured": true,
00:16:07.154 "data_offset": 0,
00:16:07.154 "data_size": 65536
00:16:07.154 },
00:16:07.154 {
00:16:07.154 "name": "BaseBdev4",
00:16:07.154 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:07.154 "is_configured": false,
00:16:07.154 "data_offset": 0,
00:16:07.154 "data_size": 0
00:16:07.154 }
00:16:07.154 ]
00:16:07.154 }'
00:16:07.154 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:07.154 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.720 [2024-10-30 10:44:28.994597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:16:07.720 [2024-10-30 10:44:28.994663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:16:07.720 [2024-10-30 10:44:28.994677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:16:07.720 [2024-10-30 10:44:28.995121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:07.720 [2024-10-30 10:44:28.995400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:16:07.720 [2024-10-30 10:44:28.995455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:16:07.720 [2024-10-30 10:44:28.995868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:07.720 BaseBdev4
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.720 10:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.720 [
00:16:07.720 {
00:16:07.720 "name": "BaseBdev4",
00:16:07.720 "aliases": [
00:16:07.720 "70a7d3b0-87fa-47e4-a238-2fbb60054d67"
00:16:07.720 ],
00:16:07.720 "product_name": "Malloc disk",
00:16:07.720 "block_size": 512,
00:16:07.720 "num_blocks": 65536,
00:16:07.720 "uuid": "70a7d3b0-87fa-47e4-a238-2fbb60054d67",
00:16:07.720 "assigned_rate_limits": {
00:16:07.720 "rw_ios_per_sec": 0,
00:16:07.720 "rw_mbytes_per_sec": 0,
00:16:07.720 "r_mbytes_per_sec": 0,
00:16:07.720 "w_mbytes_per_sec": 0
00:16:07.720 },
00:16:07.720 "claimed": true,
00:16:07.720 "claim_type": "exclusive_write",
00:16:07.720 "zoned": false,
00:16:07.720 "supported_io_types": {
00:16:07.720 "read": true,
00:16:07.720 "write": true,
00:16:07.720 "unmap": true,
00:16:07.720 "flush": true,
00:16:07.720 "reset": true,
00:16:07.720 "nvme_admin": false,
00:16:07.720 "nvme_io": false,
00:16:07.720 "nvme_io_md": false,
00:16:07.720 "write_zeroes": true,
00:16:07.720 "zcopy": true,
00:16:07.720 "get_zone_info": false,
00:16:07.720 "zone_management": false,
00:16:07.720 "zone_append": false,
00:16:07.720 "compare": false,
00:16:07.720 "compare_and_write": false,
00:16:07.720 "abort": true,
00:16:07.720 "seek_hole": false,
00:16:07.720 "seek_data": false,
00:16:07.720 "copy": true,
00:16:07.720 "nvme_iov_md": false
00:16:07.720 },
00:16:07.720 "memory_domains": [
00:16:07.720 {
00:16:07.720 "dma_device_id": "system",
00:16:07.720 "dma_device_type": 1
00:16:07.720 },
00:16:07.720 {
00:16:07.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:07.720 "dma_device_type": 2
00:16:07.720 }
00:16:07.720 ],
00:16:07.720 "driver_specific": {}
00:16:07.720 }
00:16:07.720 ]
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:07.720 "name": "Existed_Raid",
00:16:07.720 "uuid": "38fb0b10-ea3e-45d8-8526-4db82a41bea4",
00:16:07.720 "strip_size_kb": 0,
00:16:07.720 "state": "online",
00:16:07.720 "raid_level": "raid1",
00:16:07.720 "superblock": false,
00:16:07.720 "num_base_bdevs": 4,
00:16:07.720 "num_base_bdevs_discovered": 4,
00:16:07.720 "num_base_bdevs_operational": 4,
00:16:07.720 "base_bdevs_list": [
00:16:07.720 {
00:16:07.720 "name": "BaseBdev1",
00:16:07.720 "uuid": "3451a6ea-214b-4420-8e6d-97eb591de5ed",
00:16:07.720 "is_configured": true,
00:16:07.720 "data_offset": 0,
00:16:07.720 "data_size": 65536
00:16:07.720 },
00:16:07.720 {
00:16:07.720 "name": "BaseBdev2",
00:16:07.720 "uuid": "6f5c1dc7-bff2-4130-9c92-0811bf60a3ed",
00:16:07.720 "is_configured": true,
00:16:07.720 "data_offset": 0,
00:16:07.720 "data_size": 65536
00:16:07.720 },
00:16:07.720 {
00:16:07.720 "name": "BaseBdev3",
00:16:07.720 "uuid": "dc06403e-abe0-44c5-8c41-4b6a3a49b7b5",
00:16:07.720 "is_configured": true,
00:16:07.720 "data_offset": 0,
00:16:07.720 "data_size": 65536
00:16:07.720 },
00:16:07.720 {
00:16:07.720 "name": "BaseBdev4",
00:16:07.720 "uuid": "70a7d3b0-87fa-47e4-a238-2fbb60054d67",
00:16:07.720 "is_configured": true,
00:16:07.720 "data_offset": 0,
00:16:07.720 "data_size": 65536
00:16:07.720 }
00:16:07.720 ]
00:16:07.720 }'
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:07.720 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.289 [2024-10-30 10:44:29.531290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:08.289 "name": "Existed_Raid",
00:16:08.289 "aliases": [
00:16:08.289 "38fb0b10-ea3e-45d8-8526-4db82a41bea4"
00:16:08.289 ],
00:16:08.289 "product_name": "Raid Volume",
00:16:08.289 "block_size": 512,
00:16:08.289 "num_blocks": 65536,
00:16:08.289 "uuid": "38fb0b10-ea3e-45d8-8526-4db82a41bea4",
00:16:08.289 "assigned_rate_limits": {
00:16:08.289 "rw_ios_per_sec": 0,
00:16:08.289 "rw_mbytes_per_sec": 0,
00:16:08.289 "r_mbytes_per_sec": 0,
00:16:08.289 "w_mbytes_per_sec": 0
00:16:08.289 },
00:16:08.289 "claimed": false,
00:16:08.289 "zoned": false,
00:16:08.289 "supported_io_types": {
00:16:08.289 "read": true,
00:16:08.289 "write": true,
00:16:08.289 "unmap": false,
00:16:08.289 "flush": false,
00:16:08.289 "reset": true,
00:16:08.289 "nvme_admin": false,
00:16:08.289 "nvme_io": false,
00:16:08.289 "nvme_io_md": false,
00:16:08.289 "write_zeroes": true,
00:16:08.289 "zcopy": false,
00:16:08.289 "get_zone_info": false,
00:16:08.289 "zone_management": false,
00:16:08.289 "zone_append": false,
00:16:08.289 "compare": false,
00:16:08.289 "compare_and_write": false,
00:16:08.289 "abort": false,
00:16:08.289 "seek_hole": false,
00:16:08.289 "seek_data": false,
00:16:08.289 "copy": false,
00:16:08.289 "nvme_iov_md": false
00:16:08.289 },
00:16:08.289 "memory_domains": [
00:16:08.289 {
00:16:08.289 "dma_device_id": "system",
00:16:08.289 "dma_device_type": 1
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:08.289 "dma_device_type": 2
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "dma_device_id": "system",
00:16:08.289 "dma_device_type": 1
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:08.289 "dma_device_type": 2
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "dma_device_id": "system",
00:16:08.289 "dma_device_type": 1
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:08.289 "dma_device_type": 2
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "dma_device_id": "system",
00:16:08.289 "dma_device_type": 1
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:08.289 "dma_device_type": 2
00:16:08.289 }
00:16:08.289 ],
00:16:08.289 "driver_specific": {
00:16:08.289 "raid": {
00:16:08.289 "uuid": "38fb0b10-ea3e-45d8-8526-4db82a41bea4",
00:16:08.289 "strip_size_kb": 0,
00:16:08.289 "state": "online",
00:16:08.289 "raid_level": "raid1",
00:16:08.289 "superblock": false,
00:16:08.289 "num_base_bdevs": 4,
00:16:08.289 "num_base_bdevs_discovered": 4,
00:16:08.289 "num_base_bdevs_operational": 4,
00:16:08.289 "base_bdevs_list": [
00:16:08.289 {
00:16:08.289 "name": "BaseBdev1",
00:16:08.289 "uuid": "3451a6ea-214b-4420-8e6d-97eb591de5ed",
00:16:08.289 "is_configured": true,
00:16:08.289 "data_offset": 0,
00:16:08.289 "data_size": 65536
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "name": "BaseBdev2",
00:16:08.289 "uuid": "6f5c1dc7-bff2-4130-9c92-0811bf60a3ed",
00:16:08.289 "is_configured": true,
00:16:08.289 "data_offset": 0,
00:16:08.289 "data_size": 65536
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "name": "BaseBdev3",
00:16:08.289 "uuid": "dc06403e-abe0-44c5-8c41-4b6a3a49b7b5",
00:16:08.289 "is_configured": true,
00:16:08.289 "data_offset": 0,
00:16:08.289 "data_size": 65536
00:16:08.289 },
00:16:08.289 {
00:16:08.289 "name": "BaseBdev4",
00:16:08.289 "uuid": "70a7d3b0-87fa-47e4-a238-2fbb60054d67",
00:16:08.289 "is_configured": true,
00:16:08.289 "data_offset": 0,
00:16:08.289 "data_size": 65536
00:16:08.289 }
00:16:08.289 ]
00:16:08.289 }
00:16:08.289 }
00:16:08.289 }'
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:08.289 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:16:08.289 BaseBdev2
00:16:08.289 BaseBdev3
00:16:08.290 BaseBdev4'
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:08.290 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.548 [2024-10-30 10:44:29.914981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:16:08.548 10:44:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:08.548 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:08.807 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:08.807 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:08.807 "name": "Existed_Raid",
00:16:08.807 "uuid": "38fb0b10-ea3e-45d8-8526-4db82a41bea4",
00:16:08.807 "strip_size_kb": 0,
00:16:08.807 "state": "online",
00:16:08.807 "raid_level": "raid1",
00:16:08.807 "superblock": false,
00:16:08.807 "num_base_bdevs": 4,
00:16:08.807 "num_base_bdevs_discovered": 3,
00:16:08.807 "num_base_bdevs_operational": 3,
00:16:08.807 "base_bdevs_list": [
00:16:08.807 {
00:16:08.807 "name": null,
00:16:08.807 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:08.807 "is_configured": false,
00:16:08.807 "data_offset": 0,
00:16:08.807 "data_size": 65536
00:16:08.807 },
00:16:08.807 {
00:16:08.807 "name": "BaseBdev2",
00:16:08.807 "uuid": "6f5c1dc7-bff2-4130-9c92-0811bf60a3ed",
00:16:08.807 "is_configured": true,
00:16:08.807 "data_offset": 0,
00:16:08.807 "data_size": 65536
00:16:08.807 },
00:16:08.807 {
00:16:08.807 "name": "BaseBdev3",
00:16:08.807 "uuid": "dc06403e-abe0-44c5-8c41-4b6a3a49b7b5",
00:16:08.807 "is_configured": true,
00:16:08.807 "data_offset": 0,
00:16:08.807 "data_size": 65536
00:16:08.807 },
00:16:08.807 {
00:16:08.807 "name": "BaseBdev4",
00:16:08.807 "uuid": "70a7d3b0-87fa-47e4-a238-2fbb60054d67",
00:16:08.807 "is_configured": true,
00:16:08.807 "data_offset": 0,
00:16:08.807 "data_size": 65536
00:16:08.807 }
00:16:08.807 ]
00:16:08.807 }'
00:16:08.807 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:08.807 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.374 [2024-10-30 10:44:30.626386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.374 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.374 [2024-10-30 10:44:30.771741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:09.645 [2024-10-30 10:44:30.917747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
[2024-10-30 10:44:30.917872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-10-30 10:44:31.000385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-10-30 10:44:31.000469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-10-30 10:44:31.000490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:16:09.645 10:44:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:16:09.645 10:44:31 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.645 BaseBdev2 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:09.645 10:44:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:09.646 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:09.646 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:09.646 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.646 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.646 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.646 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:09.646 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.646 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.907 [ 00:16:09.907 { 00:16:09.907 "name": "BaseBdev2", 00:16:09.907 "aliases": [ 00:16:09.907 "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d" 00:16:09.907 ], 00:16:09.907 "product_name": "Malloc disk", 00:16:09.907 "block_size": 512, 00:16:09.907 "num_blocks": 65536, 00:16:09.907 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:09.907 "assigned_rate_limits": { 00:16:09.907 "rw_ios_per_sec": 0, 00:16:09.907 "rw_mbytes_per_sec": 0, 00:16:09.907 "r_mbytes_per_sec": 0, 00:16:09.907 "w_mbytes_per_sec": 0 00:16:09.907 }, 00:16:09.907 "claimed": false, 00:16:09.907 "zoned": false, 00:16:09.907 "supported_io_types": { 00:16:09.907 "read": true, 00:16:09.907 "write": true, 00:16:09.907 "unmap": true, 00:16:09.907 "flush": true, 00:16:09.907 "reset": true, 00:16:09.907 "nvme_admin": false, 00:16:09.907 "nvme_io": false, 00:16:09.907 "nvme_io_md": false, 00:16:09.907 "write_zeroes": true, 00:16:09.907 "zcopy": true, 00:16:09.907 "get_zone_info": false, 00:16:09.907 "zone_management": false, 00:16:09.907 "zone_append": false, 
00:16:09.907 "compare": false, 00:16:09.907 "compare_and_write": false, 00:16:09.907 "abort": true, 00:16:09.907 "seek_hole": false, 00:16:09.907 "seek_data": false, 00:16:09.907 "copy": true, 00:16:09.907 "nvme_iov_md": false 00:16:09.907 }, 00:16:09.907 "memory_domains": [ 00:16:09.907 { 00:16:09.907 "dma_device_id": "system", 00:16:09.907 "dma_device_type": 1 00:16:09.907 }, 00:16:09.907 { 00:16:09.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.907 "dma_device_type": 2 00:16:09.907 } 00:16:09.907 ], 00:16:09.907 "driver_specific": {} 00:16:09.907 } 00:16:09.907 ] 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.907 BaseBdev3 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.907 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.908 [ 00:16:09.908 { 00:16:09.908 "name": "BaseBdev3", 00:16:09.908 "aliases": [ 00:16:09.908 "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8" 00:16:09.908 ], 00:16:09.908 "product_name": "Malloc disk", 00:16:09.908 "block_size": 512, 00:16:09.908 "num_blocks": 65536, 00:16:09.908 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:09.908 "assigned_rate_limits": { 00:16:09.908 "rw_ios_per_sec": 0, 00:16:09.908 "rw_mbytes_per_sec": 0, 00:16:09.908 "r_mbytes_per_sec": 0, 00:16:09.908 "w_mbytes_per_sec": 0 00:16:09.908 }, 00:16:09.908 "claimed": false, 00:16:09.908 "zoned": false, 00:16:09.908 "supported_io_types": { 00:16:09.908 "read": true, 00:16:09.908 "write": true, 00:16:09.908 "unmap": true, 00:16:09.908 "flush": true, 00:16:09.908 "reset": true, 00:16:09.908 "nvme_admin": false, 00:16:09.908 "nvme_io": false, 00:16:09.908 "nvme_io_md": false, 00:16:09.908 "write_zeroes": true, 00:16:09.908 "zcopy": true, 00:16:09.908 "get_zone_info": false, 00:16:09.908 "zone_management": false, 00:16:09.908 "zone_append": false, 
00:16:09.908 "compare": false, 00:16:09.908 "compare_and_write": false, 00:16:09.908 "abort": true, 00:16:09.908 "seek_hole": false, 00:16:09.908 "seek_data": false, 00:16:09.908 "copy": true, 00:16:09.908 "nvme_iov_md": false 00:16:09.908 }, 00:16:09.908 "memory_domains": [ 00:16:09.908 { 00:16:09.908 "dma_device_id": "system", 00:16:09.908 "dma_device_type": 1 00:16:09.908 }, 00:16:09.908 { 00:16:09.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.908 "dma_device_type": 2 00:16:09.908 } 00:16:09.908 ], 00:16:09.908 "driver_specific": {} 00:16:09.908 } 00:16:09.908 ] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.908 BaseBdev4 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.908 [ 00:16:09.908 { 00:16:09.908 "name": "BaseBdev4", 00:16:09.908 "aliases": [ 00:16:09.908 "4e93f238-59df-4e15-b479-5b84e6bc1936" 00:16:09.908 ], 00:16:09.908 "product_name": "Malloc disk", 00:16:09.908 "block_size": 512, 00:16:09.908 "num_blocks": 65536, 00:16:09.908 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:09.908 "assigned_rate_limits": { 00:16:09.908 "rw_ios_per_sec": 0, 00:16:09.908 "rw_mbytes_per_sec": 0, 00:16:09.908 "r_mbytes_per_sec": 0, 00:16:09.908 "w_mbytes_per_sec": 0 00:16:09.908 }, 00:16:09.908 "claimed": false, 00:16:09.908 "zoned": false, 00:16:09.908 "supported_io_types": { 00:16:09.908 "read": true, 00:16:09.908 "write": true, 00:16:09.908 "unmap": true, 00:16:09.908 "flush": true, 00:16:09.908 "reset": true, 00:16:09.908 "nvme_admin": false, 00:16:09.908 "nvme_io": false, 00:16:09.908 "nvme_io_md": false, 00:16:09.908 "write_zeroes": true, 00:16:09.908 "zcopy": true, 00:16:09.908 "get_zone_info": false, 00:16:09.908 "zone_management": false, 00:16:09.908 "zone_append": false, 
00:16:09.908 "compare": false, 00:16:09.908 "compare_and_write": false, 00:16:09.908 "abort": true, 00:16:09.908 "seek_hole": false, 00:16:09.908 "seek_data": false, 00:16:09.908 "copy": true, 00:16:09.908 "nvme_iov_md": false 00:16:09.908 }, 00:16:09.908 "memory_domains": [ 00:16:09.908 { 00:16:09.908 "dma_device_id": "system", 00:16:09.908 "dma_device_type": 1 00:16:09.908 }, 00:16:09.908 { 00:16:09.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.908 "dma_device_type": 2 00:16:09.908 } 00:16:09.908 ], 00:16:09.908 "driver_specific": {} 00:16:09.908 } 00:16:09.908 ] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.908 [2024-10-30 10:44:31.255224] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.908 [2024-10-30 10:44:31.255844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.908 [2024-10-30 10:44:31.255951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.908 [2024-10-30 10:44:31.261048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.908 [2024-10-30 10:44:31.261208] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.908 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:16:09.909 "name": "Existed_Raid", 00:16:09.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.909 "strip_size_kb": 0, 00:16:09.909 "state": "configuring", 00:16:09.909 "raid_level": "raid1", 00:16:09.909 "superblock": false, 00:16:09.909 "num_base_bdevs": 4, 00:16:09.909 "num_base_bdevs_discovered": 3, 00:16:09.909 "num_base_bdevs_operational": 4, 00:16:09.909 "base_bdevs_list": [ 00:16:09.909 { 00:16:09.909 "name": "BaseBdev1", 00:16:09.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.909 "is_configured": false, 00:16:09.909 "data_offset": 0, 00:16:09.909 "data_size": 0 00:16:09.909 }, 00:16:09.909 { 00:16:09.909 "name": "BaseBdev2", 00:16:09.909 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:09.909 "is_configured": true, 00:16:09.909 "data_offset": 0, 00:16:09.909 "data_size": 65536 00:16:09.909 }, 00:16:09.909 { 00:16:09.909 "name": "BaseBdev3", 00:16:09.909 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:09.909 "is_configured": true, 00:16:09.909 "data_offset": 0, 00:16:09.909 "data_size": 65536 00:16:09.909 }, 00:16:09.909 { 00:16:09.909 "name": "BaseBdev4", 00:16:09.909 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:09.909 "is_configured": true, 00:16:09.909 "data_offset": 0, 00:16:09.909 "data_size": 65536 00:16:09.909 } 00:16:09.909 ] 00:16:09.909 }' 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.909 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.475 [2024-10-30 10:44:31.761637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.475 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.475 "name": "Existed_Raid", 00:16:10.475 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:10.475 "strip_size_kb": 0, 00:16:10.475 "state": "configuring", 00:16:10.475 "raid_level": "raid1", 00:16:10.475 "superblock": false, 00:16:10.475 "num_base_bdevs": 4, 00:16:10.475 "num_base_bdevs_discovered": 2, 00:16:10.475 "num_base_bdevs_operational": 4, 00:16:10.475 "base_bdevs_list": [ 00:16:10.475 { 00:16:10.475 "name": "BaseBdev1", 00:16:10.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.475 "is_configured": false, 00:16:10.475 "data_offset": 0, 00:16:10.475 "data_size": 0 00:16:10.476 }, 00:16:10.476 { 00:16:10.476 "name": null, 00:16:10.476 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:10.476 "is_configured": false, 00:16:10.476 "data_offset": 0, 00:16:10.476 "data_size": 65536 00:16:10.476 }, 00:16:10.476 { 00:16:10.476 "name": "BaseBdev3", 00:16:10.476 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:10.476 "is_configured": true, 00:16:10.476 "data_offset": 0, 00:16:10.476 "data_size": 65536 00:16:10.476 }, 00:16:10.476 { 00:16:10.476 "name": "BaseBdev4", 00:16:10.476 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:10.476 "is_configured": true, 00:16:10.476 "data_offset": 0, 00:16:10.476 "data_size": 65536 00:16:10.476 } 00:16:10.476 ] 00:16:10.476 }' 00:16:10.476 10:44:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.476 10:44:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.042 [2024-10-30 10:44:32.365828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.042 BaseBdev1 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.042 [ 00:16:11.042 { 00:16:11.042 "name": "BaseBdev1", 00:16:11.042 "aliases": [ 00:16:11.042 "bd58b7e0-14df-46b3-b249-a1094fb737a0" 00:16:11.042 ], 00:16:11.042 "product_name": "Malloc disk", 00:16:11.042 "block_size": 512, 00:16:11.042 "num_blocks": 65536, 00:16:11.042 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:11.042 "assigned_rate_limits": { 00:16:11.042 "rw_ios_per_sec": 0, 00:16:11.042 "rw_mbytes_per_sec": 0, 00:16:11.042 "r_mbytes_per_sec": 0, 00:16:11.042 "w_mbytes_per_sec": 0 00:16:11.042 }, 00:16:11.042 "claimed": true, 00:16:11.042 "claim_type": "exclusive_write", 00:16:11.042 "zoned": false, 00:16:11.042 "supported_io_types": { 00:16:11.042 "read": true, 00:16:11.042 "write": true, 00:16:11.042 "unmap": true, 00:16:11.042 "flush": true, 00:16:11.042 "reset": true, 00:16:11.042 "nvme_admin": false, 00:16:11.042 "nvme_io": false, 00:16:11.042 "nvme_io_md": false, 00:16:11.042 "write_zeroes": true, 00:16:11.042 "zcopy": true, 00:16:11.042 "get_zone_info": false, 00:16:11.042 "zone_management": false, 00:16:11.042 "zone_append": false, 00:16:11.042 "compare": false, 00:16:11.042 "compare_and_write": false, 00:16:11.042 "abort": true, 00:16:11.042 "seek_hole": false, 00:16:11.042 "seek_data": false, 00:16:11.042 "copy": true, 00:16:11.042 "nvme_iov_md": false 00:16:11.042 }, 00:16:11.042 "memory_domains": [ 00:16:11.042 { 00:16:11.042 "dma_device_id": "system", 00:16:11.042 "dma_device_type": 1 00:16:11.042 }, 00:16:11.042 { 00:16:11.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.042 "dma_device_type": 2 00:16:11.042 } 00:16:11.042 ], 00:16:11.042 "driver_specific": {} 00:16:11.042 } 00:16:11.042 ] 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.042 "name": "Existed_Raid", 00:16:11.042 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:11.042 "strip_size_kb": 0, 00:16:11.042 "state": "configuring", 00:16:11.042 "raid_level": "raid1", 00:16:11.042 "superblock": false, 00:16:11.042 "num_base_bdevs": 4, 00:16:11.042 "num_base_bdevs_discovered": 3, 00:16:11.042 "num_base_bdevs_operational": 4, 00:16:11.042 "base_bdevs_list": [ 00:16:11.042 { 00:16:11.042 "name": "BaseBdev1", 00:16:11.042 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:11.042 "is_configured": true, 00:16:11.042 "data_offset": 0, 00:16:11.042 "data_size": 65536 00:16:11.042 }, 00:16:11.042 { 00:16:11.042 "name": null, 00:16:11.042 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:11.042 "is_configured": false, 00:16:11.042 "data_offset": 0, 00:16:11.042 "data_size": 65536 00:16:11.042 }, 00:16:11.042 { 00:16:11.042 "name": "BaseBdev3", 00:16:11.042 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:11.042 "is_configured": true, 00:16:11.042 "data_offset": 0, 00:16:11.042 "data_size": 65536 00:16:11.042 }, 00:16:11.042 { 00:16:11.042 "name": "BaseBdev4", 00:16:11.042 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:11.042 "is_configured": true, 00:16:11.042 "data_offset": 0, 00:16:11.042 "data_size": 65536 00:16:11.042 } 00:16:11.042 ] 00:16:11.042 }' 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.042 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.609 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.610 [2024-10-30 10:44:32.970139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.610 10:44:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.610 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.610 "name": "Existed_Raid", 00:16:11.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.610 "strip_size_kb": 0, 00:16:11.610 "state": "configuring", 00:16:11.610 "raid_level": "raid1", 00:16:11.610 "superblock": false, 00:16:11.610 "num_base_bdevs": 4, 00:16:11.610 "num_base_bdevs_discovered": 2, 00:16:11.610 "num_base_bdevs_operational": 4, 00:16:11.610 "base_bdevs_list": [ 00:16:11.610 { 00:16:11.610 "name": "BaseBdev1", 00:16:11.610 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:11.610 "is_configured": true, 00:16:11.610 "data_offset": 0, 00:16:11.610 "data_size": 65536 00:16:11.610 }, 00:16:11.610 { 00:16:11.610 "name": null, 00:16:11.610 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:11.610 "is_configured": false, 00:16:11.610 "data_offset": 0, 00:16:11.610 "data_size": 65536 00:16:11.610 }, 00:16:11.610 { 00:16:11.610 "name": null, 00:16:11.610 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:11.610 "is_configured": false, 00:16:11.610 "data_offset": 0, 00:16:11.610 "data_size": 65536 00:16:11.610 }, 00:16:11.610 { 00:16:11.610 "name": "BaseBdev4", 00:16:11.610 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:11.610 "is_configured": true, 00:16:11.610 "data_offset": 0, 00:16:11.610 "data_size": 65536 00:16:11.610 } 00:16:11.610 ] 00:16:11.610 }' 00:16:11.610 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.610 10:44:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.176 [2024-10-30 10:44:33.526250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.176 10:44:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.176 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.176 "name": "Existed_Raid", 00:16:12.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.176 "strip_size_kb": 0, 00:16:12.176 "state": "configuring", 00:16:12.176 "raid_level": "raid1", 00:16:12.176 "superblock": false, 00:16:12.176 "num_base_bdevs": 4, 00:16:12.176 "num_base_bdevs_discovered": 3, 00:16:12.176 "num_base_bdevs_operational": 4, 00:16:12.176 "base_bdevs_list": [ 00:16:12.176 { 00:16:12.176 "name": "BaseBdev1", 00:16:12.176 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:12.176 "is_configured": true, 00:16:12.176 "data_offset": 0, 00:16:12.176 "data_size": 65536 00:16:12.176 }, 00:16:12.176 { 00:16:12.177 "name": null, 00:16:12.177 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:12.177 "is_configured": false, 00:16:12.177 "data_offset": 
0, 00:16:12.177 "data_size": 65536 00:16:12.177 }, 00:16:12.177 { 00:16:12.177 "name": "BaseBdev3", 00:16:12.177 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:12.177 "is_configured": true, 00:16:12.177 "data_offset": 0, 00:16:12.177 "data_size": 65536 00:16:12.177 }, 00:16:12.177 { 00:16:12.177 "name": "BaseBdev4", 00:16:12.177 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:12.177 "is_configured": true, 00:16:12.177 "data_offset": 0, 00:16:12.177 "data_size": 65536 00:16:12.177 } 00:16:12.177 ] 00:16:12.177 }' 00:16:12.177 10:44:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.177 10:44:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.744 [2024-10-30 10:44:34.110490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.744 10:44:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.744 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.745 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.004 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.004 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.004 "name": "Existed_Raid", 00:16:13.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.004 "strip_size_kb": 0, 00:16:13.004 "state": "configuring", 00:16:13.004 
"raid_level": "raid1", 00:16:13.004 "superblock": false, 00:16:13.004 "num_base_bdevs": 4, 00:16:13.004 "num_base_bdevs_discovered": 2, 00:16:13.004 "num_base_bdevs_operational": 4, 00:16:13.004 "base_bdevs_list": [ 00:16:13.004 { 00:16:13.004 "name": null, 00:16:13.004 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:13.004 "is_configured": false, 00:16:13.004 "data_offset": 0, 00:16:13.004 "data_size": 65536 00:16:13.004 }, 00:16:13.004 { 00:16:13.004 "name": null, 00:16:13.004 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:13.004 "is_configured": false, 00:16:13.004 "data_offset": 0, 00:16:13.004 "data_size": 65536 00:16:13.004 }, 00:16:13.004 { 00:16:13.004 "name": "BaseBdev3", 00:16:13.004 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:13.004 "is_configured": true, 00:16:13.004 "data_offset": 0, 00:16:13.004 "data_size": 65536 00:16:13.004 }, 00:16:13.004 { 00:16:13.004 "name": "BaseBdev4", 00:16:13.004 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:13.004 "is_configured": true, 00:16:13.004 "data_offset": 0, 00:16:13.004 "data_size": 65536 00:16:13.004 } 00:16:13.004 ] 00:16:13.004 }' 00:16:13.004 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.004 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.572 [2024-10-30 10:44:34.787114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.572 "name": "Existed_Raid", 00:16:13.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.572 "strip_size_kb": 0, 00:16:13.572 "state": "configuring", 00:16:13.572 "raid_level": "raid1", 00:16:13.572 "superblock": false, 00:16:13.572 "num_base_bdevs": 4, 00:16:13.572 "num_base_bdevs_discovered": 3, 00:16:13.572 "num_base_bdevs_operational": 4, 00:16:13.572 "base_bdevs_list": [ 00:16:13.572 { 00:16:13.572 "name": null, 00:16:13.572 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:13.572 "is_configured": false, 00:16:13.572 "data_offset": 0, 00:16:13.572 "data_size": 65536 00:16:13.572 }, 00:16:13.572 { 00:16:13.572 "name": "BaseBdev2", 00:16:13.572 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:13.572 "is_configured": true, 00:16:13.572 "data_offset": 0, 00:16:13.572 "data_size": 65536 00:16:13.572 }, 00:16:13.572 { 00:16:13.572 "name": "BaseBdev3", 00:16:13.572 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:13.572 "is_configured": true, 00:16:13.572 "data_offset": 0, 00:16:13.572 "data_size": 65536 00:16:13.572 }, 00:16:13.572 { 00:16:13.572 "name": "BaseBdev4", 00:16:13.572 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:13.572 "is_configured": true, 00:16:13.572 "data_offset": 0, 00:16:13.572 "data_size": 65536 00:16:13.572 } 00:16:13.572 ] 00:16:13.572 }' 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.572 10:44:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 10:44:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bd58b7e0-14df-46b3-b249-a1094fb737a0 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 [2024-10-30 10:44:35.469541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:14.146 [2024-10-30 10:44:35.469613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:14.146 [2024-10-30 10:44:35.469629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:14.146 
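The `bdev_malloc_create 32 512 -b NewBaseBdev -u bd58b7e0-...` call above and the `blockcnt 65536, blocklen 512` debug line are consistent: assuming (as in SPDK's `rpc.py bdev_malloc_create`) that the first argument is the bdev size in MiB and the second the block size in bytes, the block count follows directly. A minimal sketch of that arithmetic:

```python
# Sketch only: relate "bdev_malloc_create 32 512" to "blockcnt 65536, blocklen 512".
# Assumes arg 1 is the malloc bdev size in MiB and arg 2 the block size in bytes.
size_mib = 32
block_size = 512  # bytes

# 32 MiB divided into 512-byte blocks
num_blocks = size_mib * 1024 * 1024 // block_size
print(num_blocks)  # 65536
```

This also matches the `"num_blocks": 65536` and `"data_size": 65536` fields in the JSON dumps throughout this log.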
[2024-10-30 10:44:35.469998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:14.146 [2024-10-30 10:44:35.470232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:14.146 [2024-10-30 10:44:35.470255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:14.146 [2024-10-30 10:44:35.470552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.146 NewBaseBdev 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 [ 00:16:14.146 { 00:16:14.146 "name": "NewBaseBdev", 00:16:14.146 "aliases": [ 00:16:14.146 "bd58b7e0-14df-46b3-b249-a1094fb737a0" 00:16:14.146 ], 00:16:14.146 "product_name": "Malloc disk", 00:16:14.146 "block_size": 512, 00:16:14.146 "num_blocks": 65536, 00:16:14.146 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:14.146 "assigned_rate_limits": { 00:16:14.146 "rw_ios_per_sec": 0, 00:16:14.146 "rw_mbytes_per_sec": 0, 00:16:14.146 "r_mbytes_per_sec": 0, 00:16:14.146 "w_mbytes_per_sec": 0 00:16:14.146 }, 00:16:14.146 "claimed": true, 00:16:14.146 "claim_type": "exclusive_write", 00:16:14.146 "zoned": false, 00:16:14.146 "supported_io_types": { 00:16:14.146 "read": true, 00:16:14.146 "write": true, 00:16:14.146 "unmap": true, 00:16:14.146 "flush": true, 00:16:14.146 "reset": true, 00:16:14.146 "nvme_admin": false, 00:16:14.146 "nvme_io": false, 00:16:14.146 "nvme_io_md": false, 00:16:14.146 "write_zeroes": true, 00:16:14.146 "zcopy": true, 00:16:14.146 "get_zone_info": false, 00:16:14.146 "zone_management": false, 00:16:14.146 "zone_append": false, 00:16:14.146 "compare": false, 00:16:14.146 "compare_and_write": false, 00:16:14.146 "abort": true, 00:16:14.146 "seek_hole": false, 00:16:14.146 "seek_data": false, 00:16:14.146 "copy": true, 00:16:14.146 "nvme_iov_md": false 00:16:14.146 }, 00:16:14.146 "memory_domains": [ 00:16:14.146 { 00:16:14.146 "dma_device_id": "system", 00:16:14.146 "dma_device_type": 1 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.146 "dma_device_type": 2 00:16:14.146 } 00:16:14.146 ], 00:16:14.146 "driver_specific": {} 00:16:14.146 } 00:16:14.146 ] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
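The repeated `verify_raid_bdev_state Existed_Raid <state> raid1 0 4` calls in this transcript boil down to: fetch `bdev_raid_get_bdevs all`, select the named raid bdev with `jq '.[] | select(.name == "Existed_Raid")'`, and compare its fields to the expected values. The following is a hedged Python reimplementation of that check (not the actual test code, which is bash + jq), run against a JSON fragment abridged from the log output above:

```python
import json

# Abridged from the bdev_raid_get_bdevs output in this log, after NewBaseBdev
# is claimed and the array transitions from "configuring" to "online".
raid_bdevs = json.loads("""
[{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    # Mirrors the jq filter: '.[] | select(.name == "<name>")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "online", "raid1", 0, 4)
print(info["num_base_bdevs_discovered"])  # 4
```

Note how `num_base_bdevs_discovered` in the log oscillates between 2, 3, and 4 as base bdevs are removed and re-added, while `num_base_bdevs_operational` stays 4: the array remains "configuring" until all four slots are configured again.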
00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.146 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.146 "name": "Existed_Raid", 00:16:14.146 "uuid": "6fe7bfee-c193-4cf0-96e3-5908cb392fcd", 00:16:14.146 "strip_size_kb": 0, 00:16:14.146 "state": "online", 00:16:14.146 
"raid_level": "raid1", 00:16:14.146 "superblock": false, 00:16:14.146 "num_base_bdevs": 4, 00:16:14.146 "num_base_bdevs_discovered": 4, 00:16:14.146 "num_base_bdevs_operational": 4, 00:16:14.146 "base_bdevs_list": [ 00:16:14.146 { 00:16:14.146 "name": "NewBaseBdev", 00:16:14.146 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:14.146 "is_configured": true, 00:16:14.146 "data_offset": 0, 00:16:14.146 "data_size": 65536 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "name": "BaseBdev2", 00:16:14.146 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:14.146 "is_configured": true, 00:16:14.146 "data_offset": 0, 00:16:14.146 "data_size": 65536 00:16:14.146 }, 00:16:14.146 { 00:16:14.146 "name": "BaseBdev3", 00:16:14.146 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:14.146 "is_configured": true, 00:16:14.146 "data_offset": 0, 00:16:14.146 "data_size": 65536 00:16:14.146 }, 00:16:14.147 { 00:16:14.147 "name": "BaseBdev4", 00:16:14.147 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:14.147 "is_configured": true, 00:16:14.147 "data_offset": 0, 00:16:14.147 "data_size": 65536 00:16:14.147 } 00:16:14.147 ] 00:16:14.147 }' 00:16:14.147 10:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.147 10:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.726 [2024-10-30 10:44:36.018433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.726 "name": "Existed_Raid", 00:16:14.726 "aliases": [ 00:16:14.726 "6fe7bfee-c193-4cf0-96e3-5908cb392fcd" 00:16:14.726 ], 00:16:14.726 "product_name": "Raid Volume", 00:16:14.726 "block_size": 512, 00:16:14.726 "num_blocks": 65536, 00:16:14.726 "uuid": "6fe7bfee-c193-4cf0-96e3-5908cb392fcd", 00:16:14.726 "assigned_rate_limits": { 00:16:14.726 "rw_ios_per_sec": 0, 00:16:14.726 "rw_mbytes_per_sec": 0, 00:16:14.726 "r_mbytes_per_sec": 0, 00:16:14.726 "w_mbytes_per_sec": 0 00:16:14.726 }, 00:16:14.726 "claimed": false, 00:16:14.726 "zoned": false, 00:16:14.726 "supported_io_types": { 00:16:14.726 "read": true, 00:16:14.726 "write": true, 00:16:14.726 "unmap": false, 00:16:14.726 "flush": false, 00:16:14.726 "reset": true, 00:16:14.726 "nvme_admin": false, 00:16:14.726 "nvme_io": false, 00:16:14.726 "nvme_io_md": false, 00:16:14.726 "write_zeroes": true, 00:16:14.726 "zcopy": false, 00:16:14.726 "get_zone_info": false, 00:16:14.726 "zone_management": false, 00:16:14.726 "zone_append": false, 00:16:14.726 "compare": false, 00:16:14.726 "compare_and_write": false, 00:16:14.726 "abort": false, 00:16:14.726 "seek_hole": false, 00:16:14.726 "seek_data": false, 00:16:14.726 
"copy": false, 00:16:14.726 "nvme_iov_md": false 00:16:14.726 }, 00:16:14.726 "memory_domains": [ 00:16:14.726 { 00:16:14.726 "dma_device_id": "system", 00:16:14.726 "dma_device_type": 1 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.726 "dma_device_type": 2 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "dma_device_id": "system", 00:16:14.726 "dma_device_type": 1 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.726 "dma_device_type": 2 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "dma_device_id": "system", 00:16:14.726 "dma_device_type": 1 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.726 "dma_device_type": 2 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "dma_device_id": "system", 00:16:14.726 "dma_device_type": 1 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.726 "dma_device_type": 2 00:16:14.726 } 00:16:14.726 ], 00:16:14.726 "driver_specific": { 00:16:14.726 "raid": { 00:16:14.726 "uuid": "6fe7bfee-c193-4cf0-96e3-5908cb392fcd", 00:16:14.726 "strip_size_kb": 0, 00:16:14.726 "state": "online", 00:16:14.726 "raid_level": "raid1", 00:16:14.726 "superblock": false, 00:16:14.726 "num_base_bdevs": 4, 00:16:14.726 "num_base_bdevs_discovered": 4, 00:16:14.726 "num_base_bdevs_operational": 4, 00:16:14.726 "base_bdevs_list": [ 00:16:14.726 { 00:16:14.726 "name": "NewBaseBdev", 00:16:14.726 "uuid": "bd58b7e0-14df-46b3-b249-a1094fb737a0", 00:16:14.726 "is_configured": true, 00:16:14.726 "data_offset": 0, 00:16:14.726 "data_size": 65536 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "name": "BaseBdev2", 00:16:14.726 "uuid": "fef7ce24-c0f8-47d2-ba81-4dc6825cb36d", 00:16:14.726 "is_configured": true, 00:16:14.726 "data_offset": 0, 00:16:14.726 "data_size": 65536 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "name": "BaseBdev3", 00:16:14.726 "uuid": "efbd51f3-3eec-4954-aeb9-2b0a5bce8db8", 00:16:14.726 
"is_configured": true, 00:16:14.726 "data_offset": 0, 00:16:14.726 "data_size": 65536 00:16:14.726 }, 00:16:14.726 { 00:16:14.726 "name": "BaseBdev4", 00:16:14.726 "uuid": "4e93f238-59df-4e15-b479-5b84e6bc1936", 00:16:14.726 "is_configured": true, 00:16:14.726 "data_offset": 0, 00:16:14.726 "data_size": 65536 00:16:14.726 } 00:16:14.726 ] 00:16:14.726 } 00:16:14.726 } 00:16:14.726 }' 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:14.726 BaseBdev2 00:16:14.726 BaseBdev3 00:16:14.726 BaseBdev4' 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.726 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.985 10:44:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.985 10:44:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.985 [2024-10-30 10:44:36.373843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.985 [2024-10-30 10:44:36.373892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.985 [2024-10-30 10:44:36.374015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.985 [2024-10-30 10:44:36.374384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.985 [2024-10-30 10:44:36.374415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73488 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73488 ']' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73488 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73488 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73488' 00:16:14.985 killing process with pid 73488 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73488 00:16:14.985 [2024-10-30 10:44:36.408986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.985 10:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73488 00:16:15.552 [2024-10-30 10:44:36.763833] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:16.490 00:16:16.490 real 0m12.787s 00:16:16.490 user 0m21.253s 00:16:16.490 sys 0m1.809s 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:16.490 ************************************ 00:16:16.490 END TEST raid_state_function_test 00:16:16.490 ************************************ 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:16.490 10:44:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:16.490 10:44:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:16.490 10:44:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:16.490 10:44:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.490 ************************************ 00:16:16.490 START TEST raid_state_function_test_sb 00:16:16.490 ************************************ 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.490 
10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74172 00:16:16.490 Process raid pid: 74172 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74172' 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74172 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74172 ']' 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:16.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:16.490 10:44:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.749 [2024-10-30 10:44:38.004583] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:16:16.749 [2024-10-30 10:44:38.005454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.749 [2024-10-30 10:44:38.185613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.006 [2024-10-30 10:44:38.318133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.264 [2024-10-30 10:44:38.513449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.264 [2024-10-30 10:44:38.513523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.522 10:44:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:17.522 10:44:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:17.522 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:17.522 10:44:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.522 10:44:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.781 [2024-10-30 10:44:38.994219] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.781 [2024-10-30 10:44:38.994336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.781 [2024-10-30 10:44:38.994368] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.781 [2024-10-30 10:44:38.994384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.781 [2024-10-30 10:44:38.994419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:17.781 [2024-10-30 10:44:38.994433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.781 [2024-10-30 10:44:38.994442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:17.781 [2024-10-30 10:44:38.994457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.781 10:44:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.781 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.781 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.781 10:44:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.781 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.781 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.781 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.781 "name": "Existed_Raid", 00:16:17.781 "uuid": "c188e634-e57d-4557-90d1-15794bb19a55", 00:16:17.781 "strip_size_kb": 0, 00:16:17.781 "state": "configuring", 00:16:17.781 "raid_level": "raid1", 00:16:17.781 "superblock": true, 00:16:17.781 "num_base_bdevs": 4, 00:16:17.781 "num_base_bdevs_discovered": 0, 00:16:17.781 "num_base_bdevs_operational": 4, 00:16:17.781 "base_bdevs_list": [ 00:16:17.781 { 00:16:17.781 "name": "BaseBdev1", 00:16:17.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.781 "is_configured": false, 00:16:17.781 "data_offset": 0, 00:16:17.781 "data_size": 0 00:16:17.781 }, 00:16:17.781 { 00:16:17.781 "name": "BaseBdev2", 00:16:17.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.781 "is_configured": false, 00:16:17.781 "data_offset": 0, 00:16:17.781 "data_size": 0 00:16:17.781 }, 00:16:17.781 { 00:16:17.781 "name": "BaseBdev3", 00:16:17.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.781 "is_configured": false, 00:16:17.781 "data_offset": 0, 00:16:17.781 "data_size": 0 00:16:17.781 }, 00:16:17.781 { 00:16:17.781 "name": "BaseBdev4", 00:16:17.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.781 "is_configured": false, 00:16:17.781 "data_offset": 0, 00:16:17.781 "data_size": 0 00:16:17.781 } 00:16:17.781 ] 00:16:17.781 }' 00:16:17.781 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.781 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.040 10:44:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.040 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.040 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.040 [2024-10-30 10:44:39.502281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.040 [2024-10-30 10:44:39.502333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:18.040 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.040 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:18.040 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.040 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.300 [2024-10-30 10:44:39.510264] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.300 [2024-10-30 10:44:39.510316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.300 [2024-10-30 10:44:39.510331] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.300 [2024-10-30 10:44:39.510347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.300 [2024-10-30 10:44:39.510357] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.300 [2024-10-30 10:44:39.510372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.300 [2024-10-30 10:44:39.510382] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:18.300 [2024-10-30 10:44:39.510396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.300 [2024-10-30 10:44:39.553528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.300 BaseBdev1 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.300 [ 00:16:18.300 { 00:16:18.300 "name": "BaseBdev1", 00:16:18.300 "aliases": [ 00:16:18.300 "130acdec-102b-4759-af33-5f9e52d2d4b8" 00:16:18.300 ], 00:16:18.300 "product_name": "Malloc disk", 00:16:18.300 "block_size": 512, 00:16:18.300 "num_blocks": 65536, 00:16:18.300 "uuid": "130acdec-102b-4759-af33-5f9e52d2d4b8", 00:16:18.300 "assigned_rate_limits": { 00:16:18.300 "rw_ios_per_sec": 0, 00:16:18.300 "rw_mbytes_per_sec": 0, 00:16:18.300 "r_mbytes_per_sec": 0, 00:16:18.300 "w_mbytes_per_sec": 0 00:16:18.300 }, 00:16:18.300 "claimed": true, 00:16:18.300 "claim_type": "exclusive_write", 00:16:18.300 "zoned": false, 00:16:18.300 "supported_io_types": { 00:16:18.300 "read": true, 00:16:18.300 "write": true, 00:16:18.300 "unmap": true, 00:16:18.300 "flush": true, 00:16:18.300 "reset": true, 00:16:18.300 "nvme_admin": false, 00:16:18.300 "nvme_io": false, 00:16:18.300 "nvme_io_md": false, 00:16:18.300 "write_zeroes": true, 00:16:18.300 "zcopy": true, 00:16:18.300 "get_zone_info": false, 00:16:18.300 "zone_management": false, 00:16:18.300 "zone_append": false, 00:16:18.300 "compare": false, 00:16:18.300 "compare_and_write": false, 00:16:18.300 "abort": true, 00:16:18.300 "seek_hole": false, 00:16:18.300 "seek_data": false, 00:16:18.300 "copy": true, 00:16:18.300 "nvme_iov_md": false 00:16:18.300 }, 00:16:18.300 "memory_domains": [ 00:16:18.300 { 00:16:18.300 "dma_device_id": "system", 00:16:18.300 "dma_device_type": 1 00:16:18.300 }, 00:16:18.300 { 00:16:18.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.300 "dma_device_type": 2 00:16:18.300 } 00:16:18.300 ], 00:16:18.300 "driver_specific": {} 
00:16:18.300 } 00:16:18.300 ] 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:18.300 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.301 "name": "Existed_Raid", 00:16:18.301 "uuid": "b8926a4d-2c71-4ed2-9087-09709dccc53c", 00:16:18.301 "strip_size_kb": 0, 00:16:18.301 "state": "configuring", 00:16:18.301 "raid_level": "raid1", 00:16:18.301 "superblock": true, 00:16:18.301 "num_base_bdevs": 4, 00:16:18.301 "num_base_bdevs_discovered": 1, 00:16:18.301 "num_base_bdevs_operational": 4, 00:16:18.301 "base_bdevs_list": [ 00:16:18.301 { 00:16:18.301 "name": "BaseBdev1", 00:16:18.301 "uuid": "130acdec-102b-4759-af33-5f9e52d2d4b8", 00:16:18.301 "is_configured": true, 00:16:18.301 "data_offset": 2048, 00:16:18.301 "data_size": 63488 00:16:18.301 }, 00:16:18.301 { 00:16:18.301 "name": "BaseBdev2", 00:16:18.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.301 "is_configured": false, 00:16:18.301 "data_offset": 0, 00:16:18.301 "data_size": 0 00:16:18.301 }, 00:16:18.301 { 00:16:18.301 "name": "BaseBdev3", 00:16:18.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.301 "is_configured": false, 00:16:18.301 "data_offset": 0, 00:16:18.301 "data_size": 0 00:16:18.301 }, 00:16:18.301 { 00:16:18.301 "name": "BaseBdev4", 00:16:18.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.301 "is_configured": false, 00:16:18.301 "data_offset": 0, 00:16:18.301 "data_size": 0 00:16:18.301 } 00:16:18.301 ] 00:16:18.301 }' 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.301 10:44:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.950 [2024-10-30 10:44:40.085717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.950 [2024-10-30 10:44:40.085798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.950 [2024-10-30 10:44:40.093772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.950 [2024-10-30 10:44:40.096416] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.950 [2024-10-30 10:44:40.096633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.950 [2024-10-30 10:44:40.096661] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.950 [2024-10-30 10:44:40.096681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.950 [2024-10-30 10:44:40.096693] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:18.950 [2024-10-30 10:44:40.096707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:18.950 10:44:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.950 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.951 "name": 
"Existed_Raid", 00:16:18.951 "uuid": "0d42b5ab-ca9a-44c4-95c9-6953c7b1d526", 00:16:18.951 "strip_size_kb": 0, 00:16:18.951 "state": "configuring", 00:16:18.951 "raid_level": "raid1", 00:16:18.951 "superblock": true, 00:16:18.951 "num_base_bdevs": 4, 00:16:18.951 "num_base_bdevs_discovered": 1, 00:16:18.951 "num_base_bdevs_operational": 4, 00:16:18.951 "base_bdevs_list": [ 00:16:18.951 { 00:16:18.951 "name": "BaseBdev1", 00:16:18.951 "uuid": "130acdec-102b-4759-af33-5f9e52d2d4b8", 00:16:18.951 "is_configured": true, 00:16:18.951 "data_offset": 2048, 00:16:18.951 "data_size": 63488 00:16:18.951 }, 00:16:18.951 { 00:16:18.951 "name": "BaseBdev2", 00:16:18.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.951 "is_configured": false, 00:16:18.951 "data_offset": 0, 00:16:18.951 "data_size": 0 00:16:18.951 }, 00:16:18.951 { 00:16:18.951 "name": "BaseBdev3", 00:16:18.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.951 "is_configured": false, 00:16:18.951 "data_offset": 0, 00:16:18.951 "data_size": 0 00:16:18.951 }, 00:16:18.951 { 00:16:18.951 "name": "BaseBdev4", 00:16:18.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.951 "is_configured": false, 00:16:18.951 "data_offset": 0, 00:16:18.951 "data_size": 0 00:16:18.951 } 00:16:18.951 ] 00:16:18.951 }' 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.951 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.210 [2024-10-30 10:44:40.668765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.210 
BaseBdev2 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.210 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.469 [ 00:16:19.469 { 00:16:19.469 "name": "BaseBdev2", 00:16:19.469 "aliases": [ 00:16:19.469 "d1bdebaa-4dfb-4f42-bdca-553d21b74983" 00:16:19.469 ], 00:16:19.469 "product_name": "Malloc disk", 00:16:19.469 "block_size": 512, 00:16:19.469 "num_blocks": 65536, 00:16:19.469 "uuid": "d1bdebaa-4dfb-4f42-bdca-553d21b74983", 00:16:19.469 "assigned_rate_limits": { 
00:16:19.469 "rw_ios_per_sec": 0, 00:16:19.469 "rw_mbytes_per_sec": 0, 00:16:19.469 "r_mbytes_per_sec": 0, 00:16:19.469 "w_mbytes_per_sec": 0 00:16:19.469 }, 00:16:19.469 "claimed": true, 00:16:19.469 "claim_type": "exclusive_write", 00:16:19.469 "zoned": false, 00:16:19.469 "supported_io_types": { 00:16:19.469 "read": true, 00:16:19.469 "write": true, 00:16:19.469 "unmap": true, 00:16:19.469 "flush": true, 00:16:19.469 "reset": true, 00:16:19.469 "nvme_admin": false, 00:16:19.469 "nvme_io": false, 00:16:19.469 "nvme_io_md": false, 00:16:19.469 "write_zeroes": true, 00:16:19.469 "zcopy": true, 00:16:19.469 "get_zone_info": false, 00:16:19.469 "zone_management": false, 00:16:19.469 "zone_append": false, 00:16:19.469 "compare": false, 00:16:19.469 "compare_and_write": false, 00:16:19.469 "abort": true, 00:16:19.469 "seek_hole": false, 00:16:19.469 "seek_data": false, 00:16:19.469 "copy": true, 00:16:19.469 "nvme_iov_md": false 00:16:19.469 }, 00:16:19.469 "memory_domains": [ 00:16:19.469 { 00:16:19.469 "dma_device_id": "system", 00:16:19.469 "dma_device_type": 1 00:16:19.469 }, 00:16:19.469 { 00:16:19.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.469 "dma_device_type": 2 00:16:19.469 } 00:16:19.469 ], 00:16:19.469 "driver_specific": {} 00:16:19.469 } 00:16:19.469 ] 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.469 "name": "Existed_Raid", 00:16:19.469 "uuid": "0d42b5ab-ca9a-44c4-95c9-6953c7b1d526", 00:16:19.469 "strip_size_kb": 0, 00:16:19.469 "state": "configuring", 00:16:19.469 "raid_level": "raid1", 00:16:19.469 "superblock": true, 00:16:19.469 "num_base_bdevs": 4, 00:16:19.469 "num_base_bdevs_discovered": 2, 00:16:19.469 "num_base_bdevs_operational": 4, 00:16:19.469 
"base_bdevs_list": [ 00:16:19.469 { 00:16:19.469 "name": "BaseBdev1", 00:16:19.469 "uuid": "130acdec-102b-4759-af33-5f9e52d2d4b8", 00:16:19.469 "is_configured": true, 00:16:19.469 "data_offset": 2048, 00:16:19.469 "data_size": 63488 00:16:19.469 }, 00:16:19.469 { 00:16:19.469 "name": "BaseBdev2", 00:16:19.469 "uuid": "d1bdebaa-4dfb-4f42-bdca-553d21b74983", 00:16:19.469 "is_configured": true, 00:16:19.469 "data_offset": 2048, 00:16:19.469 "data_size": 63488 00:16:19.469 }, 00:16:19.469 { 00:16:19.469 "name": "BaseBdev3", 00:16:19.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.469 "is_configured": false, 00:16:19.469 "data_offset": 0, 00:16:19.469 "data_size": 0 00:16:19.469 }, 00:16:19.469 { 00:16:19.469 "name": "BaseBdev4", 00:16:19.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.469 "is_configured": false, 00:16:19.469 "data_offset": 0, 00:16:19.469 "data_size": 0 00:16:19.469 } 00:16:19.469 ] 00:16:19.469 }' 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.469 10:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.038 [2024-10-30 10:44:41.263594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.038 BaseBdev3 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.038 [ 00:16:20.038 { 00:16:20.038 "name": "BaseBdev3", 00:16:20.038 "aliases": [ 00:16:20.038 "f791a4dd-ad63-4d7d-a1f7-8ccfa2803a9a" 00:16:20.038 ], 00:16:20.038 "product_name": "Malloc disk", 00:16:20.038 "block_size": 512, 00:16:20.038 "num_blocks": 65536, 00:16:20.038 "uuid": "f791a4dd-ad63-4d7d-a1f7-8ccfa2803a9a", 00:16:20.038 "assigned_rate_limits": { 00:16:20.038 "rw_ios_per_sec": 0, 00:16:20.038 "rw_mbytes_per_sec": 0, 00:16:20.038 "r_mbytes_per_sec": 0, 00:16:20.038 "w_mbytes_per_sec": 0 00:16:20.038 }, 00:16:20.038 "claimed": true, 00:16:20.038 "claim_type": "exclusive_write", 00:16:20.038 "zoned": false, 00:16:20.038 "supported_io_types": { 00:16:20.038 "read": true, 00:16:20.038 
"write": true, 00:16:20.038 "unmap": true, 00:16:20.038 "flush": true, 00:16:20.038 "reset": true, 00:16:20.038 "nvme_admin": false, 00:16:20.038 "nvme_io": false, 00:16:20.038 "nvme_io_md": false, 00:16:20.038 "write_zeroes": true, 00:16:20.038 "zcopy": true, 00:16:20.038 "get_zone_info": false, 00:16:20.038 "zone_management": false, 00:16:20.038 "zone_append": false, 00:16:20.038 "compare": false, 00:16:20.038 "compare_and_write": false, 00:16:20.038 "abort": true, 00:16:20.038 "seek_hole": false, 00:16:20.038 "seek_data": false, 00:16:20.038 "copy": true, 00:16:20.038 "nvme_iov_md": false 00:16:20.038 }, 00:16:20.038 "memory_domains": [ 00:16:20.038 { 00:16:20.038 "dma_device_id": "system", 00:16:20.038 "dma_device_type": 1 00:16:20.038 }, 00:16:20.038 { 00:16:20.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.038 "dma_device_type": 2 00:16:20.038 } 00:16:20.038 ], 00:16:20.038 "driver_specific": {} 00:16:20.038 } 00:16:20.038 ] 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.038 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.039 "name": "Existed_Raid", 00:16:20.039 "uuid": "0d42b5ab-ca9a-44c4-95c9-6953c7b1d526", 00:16:20.039 "strip_size_kb": 0, 00:16:20.039 "state": "configuring", 00:16:20.039 "raid_level": "raid1", 00:16:20.039 "superblock": true, 00:16:20.039 "num_base_bdevs": 4, 00:16:20.039 "num_base_bdevs_discovered": 3, 00:16:20.039 "num_base_bdevs_operational": 4, 00:16:20.039 "base_bdevs_list": [ 00:16:20.039 { 00:16:20.039 "name": "BaseBdev1", 00:16:20.039 "uuid": "130acdec-102b-4759-af33-5f9e52d2d4b8", 00:16:20.039 "is_configured": true, 00:16:20.039 "data_offset": 2048, 00:16:20.039 "data_size": 63488 00:16:20.039 }, 00:16:20.039 { 00:16:20.039 "name": "BaseBdev2", 00:16:20.039 "uuid": 
"d1bdebaa-4dfb-4f42-bdca-553d21b74983", 00:16:20.039 "is_configured": true, 00:16:20.039 "data_offset": 2048, 00:16:20.039 "data_size": 63488 00:16:20.039 }, 00:16:20.039 { 00:16:20.039 "name": "BaseBdev3", 00:16:20.039 "uuid": "f791a4dd-ad63-4d7d-a1f7-8ccfa2803a9a", 00:16:20.039 "is_configured": true, 00:16:20.039 "data_offset": 2048, 00:16:20.039 "data_size": 63488 00:16:20.039 }, 00:16:20.039 { 00:16:20.039 "name": "BaseBdev4", 00:16:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.039 "is_configured": false, 00:16:20.039 "data_offset": 0, 00:16:20.039 "data_size": 0 00:16:20.039 } 00:16:20.039 ] 00:16:20.039 }' 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.039 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.607 [2024-10-30 10:44:41.845623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:20.607 [2024-10-30 10:44:41.845951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:20.607 [2024-10-30 10:44:41.845970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:20.607 BaseBdev4 00:16:20.607 [2024-10-30 10:44:41.846386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:20.607 [2024-10-30 10:44:41.846623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:20.607 [2024-10-30 10:44:41.846654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:20.607 [2024-10-30 10:44:41.846833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.607 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.607 [ 00:16:20.607 { 00:16:20.607 "name": "BaseBdev4", 00:16:20.607 "aliases": [ 00:16:20.607 "0f650eed-cb5d-47d2-b4e6-2cd9414f6f83" 00:16:20.607 ], 00:16:20.607 "product_name": "Malloc disk", 00:16:20.607 "block_size": 512, 00:16:20.607 
"num_blocks": 65536, 00:16:20.607 "uuid": "0f650eed-cb5d-47d2-b4e6-2cd9414f6f83", 00:16:20.607 "assigned_rate_limits": { 00:16:20.607 "rw_ios_per_sec": 0, 00:16:20.607 "rw_mbytes_per_sec": 0, 00:16:20.607 "r_mbytes_per_sec": 0, 00:16:20.607 "w_mbytes_per_sec": 0 00:16:20.607 }, 00:16:20.607 "claimed": true, 00:16:20.607 "claim_type": "exclusive_write", 00:16:20.607 "zoned": false, 00:16:20.607 "supported_io_types": { 00:16:20.607 "read": true, 00:16:20.607 "write": true, 00:16:20.607 "unmap": true, 00:16:20.607 "flush": true, 00:16:20.607 "reset": true, 00:16:20.607 "nvme_admin": false, 00:16:20.607 "nvme_io": false, 00:16:20.607 "nvme_io_md": false, 00:16:20.607 "write_zeroes": true, 00:16:20.607 "zcopy": true, 00:16:20.607 "get_zone_info": false, 00:16:20.607 "zone_management": false, 00:16:20.608 "zone_append": false, 00:16:20.608 "compare": false, 00:16:20.608 "compare_and_write": false, 00:16:20.608 "abort": true, 00:16:20.608 "seek_hole": false, 00:16:20.608 "seek_data": false, 00:16:20.608 "copy": true, 00:16:20.608 "nvme_iov_md": false 00:16:20.608 }, 00:16:20.608 "memory_domains": [ 00:16:20.608 { 00:16:20.608 "dma_device_id": "system", 00:16:20.608 "dma_device_type": 1 00:16:20.608 }, 00:16:20.608 { 00:16:20.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.608 "dma_device_type": 2 00:16:20.608 } 00:16:20.608 ], 00:16:20.608 "driver_specific": {} 00:16:20.608 } 00:16:20.608 ] 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.608 "name": "Existed_Raid", 00:16:20.608 "uuid": "0d42b5ab-ca9a-44c4-95c9-6953c7b1d526", 00:16:20.608 "strip_size_kb": 0, 00:16:20.608 "state": "online", 00:16:20.608 "raid_level": "raid1", 00:16:20.608 "superblock": true, 00:16:20.608 "num_base_bdevs": 4, 
00:16:20.608 "num_base_bdevs_discovered": 4, 00:16:20.608 "num_base_bdevs_operational": 4, 00:16:20.608 "base_bdevs_list": [ 00:16:20.608 { 00:16:20.608 "name": "BaseBdev1", 00:16:20.608 "uuid": "130acdec-102b-4759-af33-5f9e52d2d4b8", 00:16:20.608 "is_configured": true, 00:16:20.608 "data_offset": 2048, 00:16:20.608 "data_size": 63488 00:16:20.608 }, 00:16:20.608 { 00:16:20.608 "name": "BaseBdev2", 00:16:20.608 "uuid": "d1bdebaa-4dfb-4f42-bdca-553d21b74983", 00:16:20.608 "is_configured": true, 00:16:20.608 "data_offset": 2048, 00:16:20.608 "data_size": 63488 00:16:20.608 }, 00:16:20.608 { 00:16:20.608 "name": "BaseBdev3", 00:16:20.608 "uuid": "f791a4dd-ad63-4d7d-a1f7-8ccfa2803a9a", 00:16:20.608 "is_configured": true, 00:16:20.608 "data_offset": 2048, 00:16:20.608 "data_size": 63488 00:16:20.608 }, 00:16:20.608 { 00:16:20.608 "name": "BaseBdev4", 00:16:20.608 "uuid": "0f650eed-cb5d-47d2-b4e6-2cd9414f6f83", 00:16:20.608 "is_configured": true, 00:16:20.608 "data_offset": 2048, 00:16:20.608 "data_size": 63488 00:16:20.608 } 00:16:20.608 ] 00:16:20.608 }' 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.608 10:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.175 
10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.175 [2024-10-30 10:44:42.394291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.175 "name": "Existed_Raid", 00:16:21.175 "aliases": [ 00:16:21.175 "0d42b5ab-ca9a-44c4-95c9-6953c7b1d526" 00:16:21.175 ], 00:16:21.175 "product_name": "Raid Volume", 00:16:21.175 "block_size": 512, 00:16:21.175 "num_blocks": 63488, 00:16:21.175 "uuid": "0d42b5ab-ca9a-44c4-95c9-6953c7b1d526", 00:16:21.175 "assigned_rate_limits": { 00:16:21.175 "rw_ios_per_sec": 0, 00:16:21.175 "rw_mbytes_per_sec": 0, 00:16:21.175 "r_mbytes_per_sec": 0, 00:16:21.175 "w_mbytes_per_sec": 0 00:16:21.175 }, 00:16:21.175 "claimed": false, 00:16:21.175 "zoned": false, 00:16:21.175 "supported_io_types": { 00:16:21.175 "read": true, 00:16:21.175 "write": true, 00:16:21.175 "unmap": false, 00:16:21.175 "flush": false, 00:16:21.175 "reset": true, 00:16:21.175 "nvme_admin": false, 00:16:21.175 "nvme_io": false, 00:16:21.175 "nvme_io_md": false, 00:16:21.175 "write_zeroes": true, 00:16:21.175 "zcopy": false, 00:16:21.175 "get_zone_info": false, 00:16:21.175 "zone_management": false, 00:16:21.175 "zone_append": false, 00:16:21.175 "compare": false, 00:16:21.175 "compare_and_write": false, 00:16:21.175 "abort": false, 00:16:21.175 "seek_hole": false, 00:16:21.175 "seek_data": false, 00:16:21.175 "copy": false, 00:16:21.175 
"nvme_iov_md": false 00:16:21.175 }, 00:16:21.175 "memory_domains": [ 00:16:21.175 { 00:16:21.175 "dma_device_id": "system", 00:16:21.175 "dma_device_type": 1 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.175 "dma_device_type": 2 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "dma_device_id": "system", 00:16:21.175 "dma_device_type": 1 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.175 "dma_device_type": 2 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "dma_device_id": "system", 00:16:21.175 "dma_device_type": 1 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.175 "dma_device_type": 2 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "dma_device_id": "system", 00:16:21.175 "dma_device_type": 1 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.175 "dma_device_type": 2 00:16:21.175 } 00:16:21.175 ], 00:16:21.175 "driver_specific": { 00:16:21.175 "raid": { 00:16:21.175 "uuid": "0d42b5ab-ca9a-44c4-95c9-6953c7b1d526", 00:16:21.175 "strip_size_kb": 0, 00:16:21.175 "state": "online", 00:16:21.175 "raid_level": "raid1", 00:16:21.175 "superblock": true, 00:16:21.175 "num_base_bdevs": 4, 00:16:21.175 "num_base_bdevs_discovered": 4, 00:16:21.175 "num_base_bdevs_operational": 4, 00:16:21.175 "base_bdevs_list": [ 00:16:21.175 { 00:16:21.175 "name": "BaseBdev1", 00:16:21.175 "uuid": "130acdec-102b-4759-af33-5f9e52d2d4b8", 00:16:21.175 "is_configured": true, 00:16:21.175 "data_offset": 2048, 00:16:21.175 "data_size": 63488 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "name": "BaseBdev2", 00:16:21.175 "uuid": "d1bdebaa-4dfb-4f42-bdca-553d21b74983", 00:16:21.175 "is_configured": true, 00:16:21.175 "data_offset": 2048, 00:16:21.175 "data_size": 63488 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "name": "BaseBdev3", 00:16:21.175 "uuid": "f791a4dd-ad63-4d7d-a1f7-8ccfa2803a9a", 00:16:21.175 "is_configured": true, 
00:16:21.175 "data_offset": 2048, 00:16:21.175 "data_size": 63488 00:16:21.175 }, 00:16:21.175 { 00:16:21.175 "name": "BaseBdev4", 00:16:21.175 "uuid": "0f650eed-cb5d-47d2-b4e6-2cd9414f6f83", 00:16:21.175 "is_configured": true, 00:16:21.175 "data_offset": 2048, 00:16:21.175 "data_size": 63488 00:16:21.175 } 00:16:21.175 ] 00:16:21.175 } 00:16:21.175 } 00:16:21.175 }' 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:21.175 BaseBdev2 00:16:21.175 BaseBdev3 00:16:21.175 BaseBdev4' 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.175 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.176 10:44:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.176 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.176 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:21.176 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.176 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.176 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.434 [2024-10-30 10:44:42.766061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:21.434 10:44:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.434 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.435 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.435 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.435 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.435 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.435 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.435 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.435 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.435 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.693 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.693 "name": "Existed_Raid", 00:16:21.693 "uuid": "0d42b5ab-ca9a-44c4-95c9-6953c7b1d526", 00:16:21.693 "strip_size_kb": 0, 00:16:21.693 
"state": "online", 00:16:21.693 "raid_level": "raid1", 00:16:21.693 "superblock": true, 00:16:21.693 "num_base_bdevs": 4, 00:16:21.693 "num_base_bdevs_discovered": 3, 00:16:21.693 "num_base_bdevs_operational": 3, 00:16:21.693 "base_bdevs_list": [ 00:16:21.693 { 00:16:21.693 "name": null, 00:16:21.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.693 "is_configured": false, 00:16:21.693 "data_offset": 0, 00:16:21.693 "data_size": 63488 00:16:21.693 }, 00:16:21.693 { 00:16:21.693 "name": "BaseBdev2", 00:16:21.693 "uuid": "d1bdebaa-4dfb-4f42-bdca-553d21b74983", 00:16:21.693 "is_configured": true, 00:16:21.693 "data_offset": 2048, 00:16:21.693 "data_size": 63488 00:16:21.693 }, 00:16:21.693 { 00:16:21.693 "name": "BaseBdev3", 00:16:21.693 "uuid": "f791a4dd-ad63-4d7d-a1f7-8ccfa2803a9a", 00:16:21.693 "is_configured": true, 00:16:21.693 "data_offset": 2048, 00:16:21.693 "data_size": 63488 00:16:21.693 }, 00:16:21.693 { 00:16:21.693 "name": "BaseBdev4", 00:16:21.693 "uuid": "0f650eed-cb5d-47d2-b4e6-2cd9414f6f83", 00:16:21.693 "is_configured": true, 00:16:21.693 "data_offset": 2048, 00:16:21.693 "data_size": 63488 00:16:21.693 } 00:16:21.693 ] 00:16:21.693 }' 00:16:21.693 10:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.693 10:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.952 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:21.952 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.952 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.952 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:21.952 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.952 10:44:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.952 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.210 [2024-10-30 10:44:43.429654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.210 [2024-10-30 10:44:43.573129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:22.210 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.468 [2024-10-30 10:44:43.720883] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:22.468 [2024-10-30 10:44:43.721181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.468 [2024-10-30 10:44:43.804757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.468 [2024-10-30 10:44:43.805060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.468 [2024-10-30 10:44:43.805105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.468 BaseBdev2 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:22.468 [ 00:16:22.468 { 00:16:22.468 "name": "BaseBdev2", 00:16:22.468 "aliases": [ 00:16:22.468 "2849c40c-798b-41ef-be8e-02dd8a07a262" 00:16:22.468 ], 00:16:22.468 "product_name": "Malloc disk", 00:16:22.468 "block_size": 512, 00:16:22.468 "num_blocks": 65536, 00:16:22.468 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:22.468 "assigned_rate_limits": { 00:16:22.468 "rw_ios_per_sec": 0, 00:16:22.468 "rw_mbytes_per_sec": 0, 00:16:22.468 "r_mbytes_per_sec": 0, 00:16:22.468 "w_mbytes_per_sec": 0 00:16:22.468 }, 00:16:22.468 "claimed": false, 00:16:22.468 "zoned": false, 00:16:22.468 "supported_io_types": { 00:16:22.468 "read": true, 00:16:22.468 "write": true, 00:16:22.468 "unmap": true, 00:16:22.468 "flush": true, 00:16:22.468 "reset": true, 00:16:22.468 "nvme_admin": false, 00:16:22.468 "nvme_io": false, 00:16:22.468 "nvme_io_md": false, 00:16:22.468 "write_zeroes": true, 00:16:22.468 "zcopy": true, 00:16:22.468 "get_zone_info": false, 00:16:22.468 "zone_management": false, 00:16:22.468 "zone_append": false, 00:16:22.468 "compare": false, 00:16:22.468 "compare_and_write": false, 00:16:22.468 "abort": true, 00:16:22.468 "seek_hole": false, 00:16:22.468 "seek_data": false, 00:16:22.468 "copy": true, 00:16:22.468 "nvme_iov_md": false 00:16:22.468 }, 00:16:22.468 "memory_domains": [ 00:16:22.468 { 00:16:22.468 "dma_device_id": "system", 00:16:22.468 "dma_device_type": 1 00:16:22.468 }, 00:16:22.468 { 00:16:22.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.468 "dma_device_type": 2 00:16:22.468 } 00:16:22.468 ], 00:16:22.468 "driver_specific": {} 00:16:22.468 } 00:16:22.468 ] 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:22.468 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.468 10:44:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 BaseBdev3 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:22.755 10:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.755 10:44:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 [ 00:16:22.755 { 00:16:22.755 "name": "BaseBdev3", 00:16:22.755 "aliases": [ 00:16:22.755 "867aed39-762d-4e03-a9be-5b57a517ab1f" 00:16:22.755 ], 00:16:22.755 "product_name": "Malloc disk", 00:16:22.755 "block_size": 512, 00:16:22.755 "num_blocks": 65536, 00:16:22.755 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:22.755 "assigned_rate_limits": { 00:16:22.755 "rw_ios_per_sec": 0, 00:16:22.755 "rw_mbytes_per_sec": 0, 00:16:22.755 "r_mbytes_per_sec": 0, 00:16:22.755 "w_mbytes_per_sec": 0 00:16:22.755 }, 00:16:22.755 "claimed": false, 00:16:22.755 "zoned": false, 00:16:22.755 "supported_io_types": { 00:16:22.755 "read": true, 00:16:22.755 "write": true, 00:16:22.755 "unmap": true, 00:16:22.755 "flush": true, 00:16:22.755 "reset": true, 00:16:22.755 "nvme_admin": false, 00:16:22.755 "nvme_io": false, 00:16:22.755 "nvme_io_md": false, 00:16:22.755 "write_zeroes": true, 00:16:22.755 "zcopy": true, 00:16:22.755 "get_zone_info": false, 00:16:22.755 "zone_management": false, 00:16:22.755 "zone_append": false, 00:16:22.755 "compare": false, 00:16:22.755 "compare_and_write": false, 00:16:22.755 "abort": true, 00:16:22.755 "seek_hole": false, 00:16:22.755 "seek_data": false, 00:16:22.755 "copy": true, 00:16:22.755 "nvme_iov_md": false 00:16:22.755 }, 00:16:22.755 "memory_domains": [ 00:16:22.755 { 00:16:22.755 "dma_device_id": "system", 00:16:22.755 "dma_device_type": 1 00:16:22.755 }, 00:16:22.755 { 00:16:22.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.755 "dma_device_type": 2 00:16:22.755 } 00:16:22.755 ], 00:16:22.755 "driver_specific": {} 00:16:22.755 } 00:16:22.755 ] 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 BaseBdev4 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.755 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 [ 00:16:22.755 { 00:16:22.755 "name": "BaseBdev4", 00:16:22.755 "aliases": [ 00:16:22.755 "dea3b265-99ba-4cae-b842-322245b4c06b" 00:16:22.755 ], 00:16:22.755 "product_name": "Malloc disk", 00:16:22.755 "block_size": 512, 00:16:22.755 "num_blocks": 65536, 00:16:22.755 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:22.755 "assigned_rate_limits": { 00:16:22.755 "rw_ios_per_sec": 0, 00:16:22.755 "rw_mbytes_per_sec": 0, 00:16:22.755 "r_mbytes_per_sec": 0, 00:16:22.755 "w_mbytes_per_sec": 0 00:16:22.755 }, 00:16:22.755 "claimed": false, 00:16:22.755 "zoned": false, 00:16:22.755 "supported_io_types": { 00:16:22.755 "read": true, 00:16:22.755 "write": true, 00:16:22.755 "unmap": true, 00:16:22.755 "flush": true, 00:16:22.755 "reset": true, 00:16:22.755 "nvme_admin": false, 00:16:22.755 "nvme_io": false, 00:16:22.755 "nvme_io_md": false, 00:16:22.755 "write_zeroes": true, 00:16:22.755 "zcopy": true, 00:16:22.755 "get_zone_info": false, 00:16:22.755 "zone_management": false, 00:16:22.755 "zone_append": false, 00:16:22.755 "compare": false, 00:16:22.755 "compare_and_write": false, 00:16:22.755 "abort": true, 00:16:22.755 "seek_hole": false, 00:16:22.755 "seek_data": false, 00:16:22.755 "copy": true, 00:16:22.755 "nvme_iov_md": false 00:16:22.755 }, 00:16:22.755 "memory_domains": [ 00:16:22.755 { 00:16:22.755 "dma_device_id": "system", 00:16:22.755 "dma_device_type": 1 00:16:22.755 }, 00:16:22.755 { 00:16:22.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.755 "dma_device_type": 2 00:16:22.755 } 00:16:22.755 ], 00:16:22.755 "driver_specific": {} 00:16:22.755 } 00:16:22.756 ] 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
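The `waitforbdev` calls visible in the trace above (a default `bdev_timeout=2000` in milliseconds, a local loop counter `i`, and a final `return 0` once `bdev_get_bdevs -b <name> -t 2000` reports the bdev) amount to a poll-until-present loop. The following is a minimal Python sketch of that logic, not SPDK code: the `rpc_get_bdevs` callable is a stand-in for `rpc_cmd bdev_get_bdevs`, and the sample descriptor is trimmed from the BaseBdev2 JSON dumped earlier in this log.

```python
import time

def waitforbdev(bdev_name, rpc_get_bdevs, timeout_s=2.0, poll_interval_s=0.1):
    """Poll until a bdev with the given name is reported, mimicking the
    waitforbdev shell helper seen in the trace (default timeout 2000 ms).

    rpc_get_bdevs stands in for `rpc_cmd bdev_get_bdevs` and must return a
    list of bdev descriptor dicts shaped like the JSON dumps in this log.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(b.get("name") == bdev_name for b in rpc_get_bdevs()):
            return 0  # bdev found: same success convention as the shell helper
        time.sleep(poll_interval_s)
    return 1  # timed out waiting for the bdev to appear

# Descriptor trimmed from the BaseBdev2 JSON in the log above.
fake_bdevs = [{"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}]
found = waitforbdev("BaseBdev2", lambda: fake_bdevs)
missing = waitforbdev("BaseBdev9", lambda: fake_bdevs, timeout_s=0.2)
```

In the real script the timeout matters because malloc bdevs only become visible after `bdev_wait_for_examine` completes, which is why the trace issues that RPC before each `bdev_get_bdevs -t 2000`.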
00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.756 [2024-10-30 10:44:44.094940] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.756 [2024-10-30 10:44:44.095158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.756 [2024-10-30 10:44:44.095287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.756 [2024-10-30 10:44:44.097852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:22.756 [2024-10-30 10:44:44.098066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.756 "name": "Existed_Raid", 00:16:22.756 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:22.756 "strip_size_kb": 0, 00:16:22.756 "state": "configuring", 00:16:22.756 "raid_level": "raid1", 00:16:22.756 "superblock": true, 00:16:22.756 "num_base_bdevs": 4, 00:16:22.756 "num_base_bdevs_discovered": 3, 00:16:22.756 "num_base_bdevs_operational": 4, 00:16:22.756 "base_bdevs_list": [ 00:16:22.756 { 00:16:22.756 "name": "BaseBdev1", 00:16:22.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.756 "is_configured": false, 00:16:22.756 "data_offset": 0, 00:16:22.756 "data_size": 0 00:16:22.756 }, 00:16:22.756 { 00:16:22.756 "name": "BaseBdev2", 00:16:22.756 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 
00:16:22.756 "is_configured": true, 00:16:22.756 "data_offset": 2048, 00:16:22.756 "data_size": 63488 00:16:22.756 }, 00:16:22.756 { 00:16:22.756 "name": "BaseBdev3", 00:16:22.756 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:22.756 "is_configured": true, 00:16:22.756 "data_offset": 2048, 00:16:22.756 "data_size": 63488 00:16:22.756 }, 00:16:22.756 { 00:16:22.756 "name": "BaseBdev4", 00:16:22.756 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:22.756 "is_configured": true, 00:16:22.756 "data_offset": 2048, 00:16:22.756 "data_size": 63488 00:16:22.756 } 00:16:22.756 ] 00:16:22.756 }' 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.756 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.322 [2024-10-30 10:44:44.623231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.322 "name": "Existed_Raid", 00:16:23.322 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:23.322 "strip_size_kb": 0, 00:16:23.322 "state": "configuring", 00:16:23.322 "raid_level": "raid1", 00:16:23.322 "superblock": true, 00:16:23.322 "num_base_bdevs": 4, 00:16:23.322 "num_base_bdevs_discovered": 2, 00:16:23.322 "num_base_bdevs_operational": 4, 00:16:23.322 "base_bdevs_list": [ 00:16:23.322 { 00:16:23.322 "name": "BaseBdev1", 00:16:23.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.322 "is_configured": false, 00:16:23.322 "data_offset": 0, 00:16:23.322 "data_size": 0 00:16:23.322 }, 00:16:23.322 { 00:16:23.322 "name": null, 00:16:23.322 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:23.322 
"is_configured": false, 00:16:23.322 "data_offset": 0, 00:16:23.322 "data_size": 63488 00:16:23.322 }, 00:16:23.322 { 00:16:23.322 "name": "BaseBdev3", 00:16:23.322 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:23.322 "is_configured": true, 00:16:23.322 "data_offset": 2048, 00:16:23.322 "data_size": 63488 00:16:23.322 }, 00:16:23.322 { 00:16:23.322 "name": "BaseBdev4", 00:16:23.322 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:23.322 "is_configured": true, 00:16:23.322 "data_offset": 2048, 00:16:23.322 "data_size": 63488 00:16:23.322 } 00:16:23.322 ] 00:16:23.322 }' 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.322 10:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.887 [2024-10-30 10:44:45.249154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.887 BaseBdev1 
00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.887 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.887 [ 00:16:23.887 { 00:16:23.887 "name": "BaseBdev1", 00:16:23.887 "aliases": [ 00:16:23.887 "9ce29a40-367d-4ca1-a627-16da5080f176" 00:16:23.887 ], 00:16:23.887 "product_name": "Malloc disk", 00:16:23.887 "block_size": 512, 00:16:23.887 "num_blocks": 65536, 00:16:23.887 "uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:23.888 "assigned_rate_limits": { 00:16:23.888 
"rw_ios_per_sec": 0, 00:16:23.888 "rw_mbytes_per_sec": 0, 00:16:23.888 "r_mbytes_per_sec": 0, 00:16:23.888 "w_mbytes_per_sec": 0 00:16:23.888 }, 00:16:23.888 "claimed": true, 00:16:23.888 "claim_type": "exclusive_write", 00:16:23.888 "zoned": false, 00:16:23.888 "supported_io_types": { 00:16:23.888 "read": true, 00:16:23.888 "write": true, 00:16:23.888 "unmap": true, 00:16:23.888 "flush": true, 00:16:23.888 "reset": true, 00:16:23.888 "nvme_admin": false, 00:16:23.888 "nvme_io": false, 00:16:23.888 "nvme_io_md": false, 00:16:23.888 "write_zeroes": true, 00:16:23.888 "zcopy": true, 00:16:23.888 "get_zone_info": false, 00:16:23.888 "zone_management": false, 00:16:23.888 "zone_append": false, 00:16:23.888 "compare": false, 00:16:23.888 "compare_and_write": false, 00:16:23.888 "abort": true, 00:16:23.888 "seek_hole": false, 00:16:23.888 "seek_data": false, 00:16:23.888 "copy": true, 00:16:23.888 "nvme_iov_md": false 00:16:23.888 }, 00:16:23.888 "memory_domains": [ 00:16:23.888 { 00:16:23.888 "dma_device_id": "system", 00:16:23.888 "dma_device_type": 1 00:16:23.888 }, 00:16:23.888 { 00:16:23.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.888 "dma_device_type": 2 00:16:23.888 } 00:16:23.888 ], 00:16:23.888 "driver_specific": {} 00:16:23.888 } 00:16:23.888 ] 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.888 "name": "Existed_Raid", 00:16:23.888 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:23.888 "strip_size_kb": 0, 00:16:23.888 "state": "configuring", 00:16:23.888 "raid_level": "raid1", 00:16:23.888 "superblock": true, 00:16:23.888 "num_base_bdevs": 4, 00:16:23.888 "num_base_bdevs_discovered": 3, 00:16:23.888 "num_base_bdevs_operational": 4, 00:16:23.888 "base_bdevs_list": [ 00:16:23.888 { 00:16:23.888 "name": "BaseBdev1", 00:16:23.888 "uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:23.888 "is_configured": true, 00:16:23.888 "data_offset": 2048, 00:16:23.888 "data_size": 63488 
00:16:23.888 }, 00:16:23.888 { 00:16:23.888 "name": null, 00:16:23.888 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:23.888 "is_configured": false, 00:16:23.888 "data_offset": 0, 00:16:23.888 "data_size": 63488 00:16:23.888 }, 00:16:23.888 { 00:16:23.888 "name": "BaseBdev3", 00:16:23.888 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:23.888 "is_configured": true, 00:16:23.888 "data_offset": 2048, 00:16:23.888 "data_size": 63488 00:16:23.888 }, 00:16:23.888 { 00:16:23.888 "name": "BaseBdev4", 00:16:23.888 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:23.888 "is_configured": true, 00:16:23.888 "data_offset": 2048, 00:16:23.888 "data_size": 63488 00:16:23.888 } 00:16:23.888 ] 00:16:23.888 }' 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.888 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.453 
[2024-10-30 10:44:45.841434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.453 10:44:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.453 "name": "Existed_Raid", 00:16:24.453 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:24.453 "strip_size_kb": 0, 00:16:24.453 "state": "configuring", 00:16:24.453 "raid_level": "raid1", 00:16:24.453 "superblock": true, 00:16:24.453 "num_base_bdevs": 4, 00:16:24.453 "num_base_bdevs_discovered": 2, 00:16:24.453 "num_base_bdevs_operational": 4, 00:16:24.453 "base_bdevs_list": [ 00:16:24.453 { 00:16:24.453 "name": "BaseBdev1", 00:16:24.453 "uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:24.453 "is_configured": true, 00:16:24.453 "data_offset": 2048, 00:16:24.453 "data_size": 63488 00:16:24.453 }, 00:16:24.453 { 00:16:24.453 "name": null, 00:16:24.453 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:24.453 "is_configured": false, 00:16:24.453 "data_offset": 0, 00:16:24.453 "data_size": 63488 00:16:24.453 }, 00:16:24.453 { 00:16:24.453 "name": null, 00:16:24.453 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:24.453 "is_configured": false, 00:16:24.453 "data_offset": 0, 00:16:24.453 "data_size": 63488 00:16:24.453 }, 00:16:24.453 { 00:16:24.453 "name": "BaseBdev4", 00:16:24.453 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:24.453 "is_configured": true, 00:16:24.453 "data_offset": 2048, 00:16:24.453 "data_size": 63488 00:16:24.453 } 00:16:24.453 ] 00:16:24.453 }' 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.453 10:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.017 
10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.017 [2024-10-30 10:44:46.417597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.017 "name": "Existed_Raid", 00:16:25.017 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:25.017 "strip_size_kb": 0, 00:16:25.017 "state": "configuring", 00:16:25.017 "raid_level": "raid1", 00:16:25.017 "superblock": true, 00:16:25.017 "num_base_bdevs": 4, 00:16:25.017 "num_base_bdevs_discovered": 3, 00:16:25.017 "num_base_bdevs_operational": 4, 00:16:25.017 "base_bdevs_list": [ 00:16:25.017 { 00:16:25.017 "name": "BaseBdev1", 00:16:25.017 "uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:25.017 "is_configured": true, 00:16:25.017 "data_offset": 2048, 00:16:25.017 "data_size": 63488 00:16:25.017 }, 00:16:25.017 { 00:16:25.017 "name": null, 00:16:25.017 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:25.017 "is_configured": false, 00:16:25.017 "data_offset": 0, 00:16:25.017 "data_size": 63488 00:16:25.017 }, 00:16:25.017 { 00:16:25.017 "name": "BaseBdev3", 00:16:25.017 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:25.017 "is_configured": true, 00:16:25.017 "data_offset": 2048, 00:16:25.017 "data_size": 63488 00:16:25.017 }, 00:16:25.017 { 00:16:25.017 "name": "BaseBdev4", 00:16:25.017 "uuid": 
"dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:25.017 "is_configured": true, 00:16:25.017 "data_offset": 2048, 00:16:25.017 "data_size": 63488 00:16:25.017 } 00:16:25.017 ] 00:16:25.017 }' 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.017 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.584 10:44:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.584 [2024-10-30 10:44:46.985844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.841 "name": "Existed_Raid", 00:16:25.841 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:25.841 "strip_size_kb": 0, 00:16:25.841 "state": "configuring", 00:16:25.841 "raid_level": "raid1", 00:16:25.841 "superblock": true, 00:16:25.841 "num_base_bdevs": 4, 00:16:25.841 "num_base_bdevs_discovered": 2, 00:16:25.841 "num_base_bdevs_operational": 4, 00:16:25.841 "base_bdevs_list": [ 00:16:25.841 { 00:16:25.841 "name": null, 00:16:25.841 
"uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:25.841 "is_configured": false, 00:16:25.841 "data_offset": 0, 00:16:25.841 "data_size": 63488 00:16:25.841 }, 00:16:25.841 { 00:16:25.841 "name": null, 00:16:25.841 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:25.841 "is_configured": false, 00:16:25.841 "data_offset": 0, 00:16:25.841 "data_size": 63488 00:16:25.841 }, 00:16:25.841 { 00:16:25.841 "name": "BaseBdev3", 00:16:25.841 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:25.841 "is_configured": true, 00:16:25.841 "data_offset": 2048, 00:16:25.841 "data_size": 63488 00:16:25.841 }, 00:16:25.841 { 00:16:25.841 "name": "BaseBdev4", 00:16:25.841 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:25.841 "is_configured": true, 00:16:25.841 "data_offset": 2048, 00:16:25.841 "data_size": 63488 00:16:25.841 } 00:16:25.841 ] 00:16:25.841 }' 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.841 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.407 [2024-10-30 10:44:47.648165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.407 10:44:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.407 "name": "Existed_Raid", 00:16:26.407 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:26.407 "strip_size_kb": 0, 00:16:26.407 "state": "configuring", 00:16:26.407 "raid_level": "raid1", 00:16:26.407 "superblock": true, 00:16:26.407 "num_base_bdevs": 4, 00:16:26.407 "num_base_bdevs_discovered": 3, 00:16:26.407 "num_base_bdevs_operational": 4, 00:16:26.407 "base_bdevs_list": [ 00:16:26.407 { 00:16:26.407 "name": null, 00:16:26.407 "uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:26.407 "is_configured": false, 00:16:26.407 "data_offset": 0, 00:16:26.407 "data_size": 63488 00:16:26.407 }, 00:16:26.407 { 00:16:26.407 "name": "BaseBdev2", 00:16:26.407 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:26.407 "is_configured": true, 00:16:26.407 "data_offset": 2048, 00:16:26.407 "data_size": 63488 00:16:26.407 }, 00:16:26.407 { 00:16:26.407 "name": "BaseBdev3", 00:16:26.407 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:26.407 "is_configured": true, 00:16:26.407 "data_offset": 2048, 00:16:26.407 "data_size": 63488 00:16:26.407 }, 00:16:26.407 { 00:16:26.407 "name": "BaseBdev4", 00:16:26.407 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:26.407 "is_configured": true, 00:16:26.407 "data_offset": 2048, 00:16:26.407 "data_size": 63488 00:16:26.407 } 00:16:26.407 ] 00:16:26.407 }' 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.407 10:44:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:26.976 10:44:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9ce29a40-367d-4ca1-a627-16da5080f176 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.976 [2024-10-30 10:44:48.338791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:26.976 NewBaseBdev 00:16:26.976 [2024-10-30 10:44:48.339478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:26.976 [2024-10-30 10:44:48.339514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:26.976 [2024-10-30 10:44:48.339916] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:26.976 [2024-10-30 10:44:48.340158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:26.976 [2024-10-30 10:44:48.340176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:26.976 [2024-10-30 10:44:48.340363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.976 10:44:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.976 [ 00:16:26.976 { 00:16:26.976 "name": "NewBaseBdev", 00:16:26.976 "aliases": [ 00:16:26.976 "9ce29a40-367d-4ca1-a627-16da5080f176" 00:16:26.976 ], 00:16:26.976 "product_name": "Malloc disk", 00:16:26.976 "block_size": 512, 00:16:26.976 "num_blocks": 65536, 00:16:26.976 "uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:26.976 "assigned_rate_limits": { 00:16:26.976 "rw_ios_per_sec": 0, 00:16:26.976 "rw_mbytes_per_sec": 0, 00:16:26.976 "r_mbytes_per_sec": 0, 00:16:26.976 "w_mbytes_per_sec": 0 00:16:26.976 }, 00:16:26.976 "claimed": true, 00:16:26.976 "claim_type": "exclusive_write", 00:16:26.976 "zoned": false, 00:16:26.976 "supported_io_types": { 00:16:26.976 "read": true, 00:16:26.976 "write": true, 00:16:26.976 "unmap": true, 00:16:26.976 "flush": true, 00:16:26.976 "reset": true, 00:16:26.976 "nvme_admin": false, 00:16:26.976 "nvme_io": false, 00:16:26.976 "nvme_io_md": false, 00:16:26.976 "write_zeroes": true, 00:16:26.976 "zcopy": true, 00:16:26.976 "get_zone_info": false, 00:16:26.976 "zone_management": false, 00:16:26.976 "zone_append": false, 00:16:26.976 "compare": false, 00:16:26.976 "compare_and_write": false, 00:16:26.976 "abort": true, 00:16:26.976 "seek_hole": false, 00:16:26.976 "seek_data": false, 00:16:26.976 "copy": true, 00:16:26.976 "nvme_iov_md": false 00:16:26.976 }, 00:16:26.976 "memory_domains": [ 00:16:26.976 { 00:16:26.976 "dma_device_id": "system", 00:16:26.976 "dma_device_type": 1 00:16:26.976 }, 00:16:26.976 { 00:16:26.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.976 "dma_device_type": 2 00:16:26.976 } 00:16:26.976 ], 00:16:26.976 "driver_specific": {} 00:16:26.976 } 00:16:26.976 ] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:26.976 10:44:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.976 "name": "Existed_Raid", 00:16:26.976 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:26.976 "strip_size_kb": 0, 00:16:26.976 
"state": "online", 00:16:26.976 "raid_level": "raid1", 00:16:26.976 "superblock": true, 00:16:26.976 "num_base_bdevs": 4, 00:16:26.976 "num_base_bdevs_discovered": 4, 00:16:26.976 "num_base_bdevs_operational": 4, 00:16:26.976 "base_bdevs_list": [ 00:16:26.976 { 00:16:26.976 "name": "NewBaseBdev", 00:16:26.976 "uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:26.976 "is_configured": true, 00:16:26.976 "data_offset": 2048, 00:16:26.976 "data_size": 63488 00:16:26.976 }, 00:16:26.976 { 00:16:26.976 "name": "BaseBdev2", 00:16:26.976 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:26.976 "is_configured": true, 00:16:26.976 "data_offset": 2048, 00:16:26.976 "data_size": 63488 00:16:26.976 }, 00:16:26.976 { 00:16:26.976 "name": "BaseBdev3", 00:16:26.976 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:26.976 "is_configured": true, 00:16:26.976 "data_offset": 2048, 00:16:26.976 "data_size": 63488 00:16:26.976 }, 00:16:26.976 { 00:16:26.976 "name": "BaseBdev4", 00:16:26.976 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:26.976 "is_configured": true, 00:16:26.976 "data_offset": 2048, 00:16:26.976 "data_size": 63488 00:16:26.976 } 00:16:26.976 ] 00:16:26.976 }' 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.976 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.631 
10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.631 [2024-10-30 10:44:48.871577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.631 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.631 "name": "Existed_Raid", 00:16:27.631 "aliases": [ 00:16:27.631 "4ec542e6-31a9-42be-a644-7c1b3e1d3c40" 00:16:27.631 ], 00:16:27.631 "product_name": "Raid Volume", 00:16:27.631 "block_size": 512, 00:16:27.631 "num_blocks": 63488, 00:16:27.631 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:27.631 "assigned_rate_limits": { 00:16:27.631 "rw_ios_per_sec": 0, 00:16:27.631 "rw_mbytes_per_sec": 0, 00:16:27.631 "r_mbytes_per_sec": 0, 00:16:27.631 "w_mbytes_per_sec": 0 00:16:27.631 }, 00:16:27.631 "claimed": false, 00:16:27.631 "zoned": false, 00:16:27.631 "supported_io_types": { 00:16:27.631 "read": true, 00:16:27.631 "write": true, 00:16:27.631 "unmap": false, 00:16:27.631 "flush": false, 00:16:27.631 "reset": true, 00:16:27.631 "nvme_admin": false, 00:16:27.631 "nvme_io": false, 00:16:27.631 "nvme_io_md": false, 00:16:27.632 "write_zeroes": true, 00:16:27.632 "zcopy": false, 00:16:27.632 "get_zone_info": false, 00:16:27.632 "zone_management": false, 00:16:27.632 "zone_append": false, 00:16:27.632 "compare": false, 00:16:27.632 "compare_and_write": false, 00:16:27.632 
"abort": false, 00:16:27.632 "seek_hole": false, 00:16:27.632 "seek_data": false, 00:16:27.632 "copy": false, 00:16:27.632 "nvme_iov_md": false 00:16:27.632 }, 00:16:27.632 "memory_domains": [ 00:16:27.632 { 00:16:27.632 "dma_device_id": "system", 00:16:27.632 "dma_device_type": 1 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.632 "dma_device_type": 2 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "dma_device_id": "system", 00:16:27.632 "dma_device_type": 1 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.632 "dma_device_type": 2 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "dma_device_id": "system", 00:16:27.632 "dma_device_type": 1 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.632 "dma_device_type": 2 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "dma_device_id": "system", 00:16:27.632 "dma_device_type": 1 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.632 "dma_device_type": 2 00:16:27.632 } 00:16:27.632 ], 00:16:27.632 "driver_specific": { 00:16:27.632 "raid": { 00:16:27.632 "uuid": "4ec542e6-31a9-42be-a644-7c1b3e1d3c40", 00:16:27.632 "strip_size_kb": 0, 00:16:27.632 "state": "online", 00:16:27.632 "raid_level": "raid1", 00:16:27.632 "superblock": true, 00:16:27.632 "num_base_bdevs": 4, 00:16:27.632 "num_base_bdevs_discovered": 4, 00:16:27.632 "num_base_bdevs_operational": 4, 00:16:27.632 "base_bdevs_list": [ 00:16:27.632 { 00:16:27.632 "name": "NewBaseBdev", 00:16:27.632 "uuid": "9ce29a40-367d-4ca1-a627-16da5080f176", 00:16:27.632 "is_configured": true, 00:16:27.632 "data_offset": 2048, 00:16:27.632 "data_size": 63488 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "name": "BaseBdev2", 00:16:27.632 "uuid": "2849c40c-798b-41ef-be8e-02dd8a07a262", 00:16:27.632 "is_configured": true, 00:16:27.632 "data_offset": 2048, 00:16:27.632 "data_size": 63488 00:16:27.632 }, 00:16:27.632 { 
00:16:27.632 "name": "BaseBdev3", 00:16:27.632 "uuid": "867aed39-762d-4e03-a9be-5b57a517ab1f", 00:16:27.632 "is_configured": true, 00:16:27.632 "data_offset": 2048, 00:16:27.632 "data_size": 63488 00:16:27.632 }, 00:16:27.632 { 00:16:27.632 "name": "BaseBdev4", 00:16:27.632 "uuid": "dea3b265-99ba-4cae-b842-322245b4c06b", 00:16:27.632 "is_configured": true, 00:16:27.632 "data_offset": 2048, 00:16:27.632 "data_size": 63488 00:16:27.632 } 00:16:27.632 ] 00:16:27.632 } 00:16:27.632 } 00:16:27.632 }' 00:16:27.632 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.632 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:27.632 BaseBdev2 00:16:27.632 BaseBdev3 00:16:27.632 BaseBdev4' 00:16:27.632 10:44:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.632 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.891 [2024-10-30 10:44:49.243142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.891 [2024-10-30 10:44:49.243345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.891 [2024-10-30 10:44:49.243568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.891 [2024-10-30 10:44:49.244032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.891 [2024-10-30 10:44:49.244058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74172 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74172 ']' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74172 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74172 00:16:27.891 killing process with pid 74172 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74172' 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74172 00:16:27.891 [2024-10-30 10:44:49.280329] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.891 10:44:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74172 00:16:28.459 [2024-10-30 10:44:49.663557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.393 10:44:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:29.393 00:16:29.393 real 0m12.897s 00:16:29.393 user 0m21.313s 00:16:29.393 sys 0m1.825s 00:16:29.393 10:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:16:29.393 10:44:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.393 ************************************ 00:16:29.393 END TEST raid_state_function_test_sb 00:16:29.393 ************************************ 00:16:29.393 10:44:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:29.393 10:44:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:29.393 10:44:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:29.393 10:44:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.393 ************************************ 00:16:29.393 START TEST raid_superblock_test 00:16:29.393 ************************************ 00:16:29.393 10:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:16:29.393 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:29.393 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:29.394 10:44:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74854 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74854 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74854 ']' 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:29.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:29.394 10:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.652 [2024-10-30 10:44:50.950523] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:16:29.652 [2024-10-30 10:44:50.950692] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74854 ] 00:16:29.910 [2024-10-30 10:44:51.137707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.910 [2024-10-30 10:44:51.319342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.169 [2024-10-30 10:44:51.537494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.169 [2024-10-30 10:44:51.537794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:30.738 
10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.738 malloc1 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.738 [2024-10-30 10:44:51.971187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:30.738 [2024-10-30 10:44:51.971422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.738 [2024-10-30 10:44:51.971502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.738 [2024-10-30 10:44:51.971625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.738 [2024-10-30 10:44:51.974617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.738 [2024-10-30 10:44:51.974803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:30.738 pt1 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.738 10:44:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.738 malloc2 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.738 [2024-10-30 10:44:52.027321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.738 [2024-10-30 10:44:52.027578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.738 [2024-10-30 10:44:52.027657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.738 [2024-10-30 10:44:52.027765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.738 [2024-10-30 10:44:52.030694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.738 [2024-10-30 10:44:52.030876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.738 
pt2 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.738 malloc3 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.738 [2024-10-30 10:44:52.104228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:30.738 [2024-10-30 10:44:52.104468] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.738 [2024-10-30 10:44:52.104576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:30.738 [2024-10-30 10:44:52.104815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.738 [2024-10-30 10:44:52.108398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.738 [2024-10-30 10:44:52.108597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:30.738 pt3 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.738 malloc4 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.738 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.738 [2024-10-30 10:44:52.163511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:30.738 [2024-10-30 10:44:52.163709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.738 [2024-10-30 10:44:52.163784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:30.738 [2024-10-30 10:44:52.164029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.738 [2024-10-30 10:44:52.166891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.738 [2024-10-30 10:44:52.167054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:30.738 pt4 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.739 [2024-10-30 10:44:52.175557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:30.739 [2024-10-30 10:44:52.178057] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.739 [2024-10-30 10:44:52.178164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:30.739 [2024-10-30 10:44:52.178236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:30.739 [2024-10-30 10:44:52.178500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:30.739 [2024-10-30 10:44:52.178525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.739 [2024-10-30 10:44:52.178852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:30.739 [2024-10-30 10:44:52.179138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:30.739 [2024-10-30 10:44:52.179174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:30.739 [2024-10-30 10:44:52.179351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.739 
10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.739 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.998 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.998 "name": "raid_bdev1", 00:16:30.998 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:30.998 "strip_size_kb": 0, 00:16:30.998 "state": "online", 00:16:30.998 "raid_level": "raid1", 00:16:30.998 "superblock": true, 00:16:30.998 "num_base_bdevs": 4, 00:16:30.998 "num_base_bdevs_discovered": 4, 00:16:30.998 "num_base_bdevs_operational": 4, 00:16:30.998 "base_bdevs_list": [ 00:16:30.998 { 00:16:30.998 "name": "pt1", 00:16:30.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.998 "is_configured": true, 00:16:30.998 "data_offset": 2048, 00:16:30.998 "data_size": 63488 00:16:30.998 }, 00:16:30.998 { 00:16:30.998 "name": "pt2", 00:16:30.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.998 "is_configured": true, 00:16:30.998 "data_offset": 2048, 00:16:30.998 "data_size": 63488 00:16:30.998 }, 00:16:30.998 { 00:16:30.998 "name": "pt3", 00:16:30.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.998 "is_configured": true, 00:16:30.998 "data_offset": 2048, 00:16:30.998 "data_size": 63488 
00:16:30.998 }, 00:16:30.998 { 00:16:30.998 "name": "pt4", 00:16:30.998 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.998 "is_configured": true, 00:16:30.998 "data_offset": 2048, 00:16:30.998 "data_size": 63488 00:16:30.998 } 00:16:30.998 ] 00:16:30.998 }' 00:16:30.998 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.998 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.276 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.276 [2024-10-30 10:44:52.724197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.564 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.564 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.564 "name": "raid_bdev1", 00:16:31.564 "aliases": [ 00:16:31.564 "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce" 00:16:31.564 ], 
00:16:31.564 "product_name": "Raid Volume", 00:16:31.564 "block_size": 512, 00:16:31.564 "num_blocks": 63488, 00:16:31.564 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:31.564 "assigned_rate_limits": { 00:16:31.564 "rw_ios_per_sec": 0, 00:16:31.564 "rw_mbytes_per_sec": 0, 00:16:31.564 "r_mbytes_per_sec": 0, 00:16:31.564 "w_mbytes_per_sec": 0 00:16:31.564 }, 00:16:31.564 "claimed": false, 00:16:31.564 "zoned": false, 00:16:31.564 "supported_io_types": { 00:16:31.564 "read": true, 00:16:31.564 "write": true, 00:16:31.564 "unmap": false, 00:16:31.564 "flush": false, 00:16:31.564 "reset": true, 00:16:31.564 "nvme_admin": false, 00:16:31.564 "nvme_io": false, 00:16:31.564 "nvme_io_md": false, 00:16:31.564 "write_zeroes": true, 00:16:31.564 "zcopy": false, 00:16:31.564 "get_zone_info": false, 00:16:31.564 "zone_management": false, 00:16:31.564 "zone_append": false, 00:16:31.564 "compare": false, 00:16:31.564 "compare_and_write": false, 00:16:31.564 "abort": false, 00:16:31.564 "seek_hole": false, 00:16:31.564 "seek_data": false, 00:16:31.564 "copy": false, 00:16:31.564 "nvme_iov_md": false 00:16:31.564 }, 00:16:31.564 "memory_domains": [ 00:16:31.564 { 00:16:31.564 "dma_device_id": "system", 00:16:31.564 "dma_device_type": 1 00:16:31.564 }, 00:16:31.564 { 00:16:31.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.564 "dma_device_type": 2 00:16:31.564 }, 00:16:31.564 { 00:16:31.565 "dma_device_id": "system", 00:16:31.565 "dma_device_type": 1 00:16:31.565 }, 00:16:31.565 { 00:16:31.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.565 "dma_device_type": 2 00:16:31.565 }, 00:16:31.565 { 00:16:31.565 "dma_device_id": "system", 00:16:31.565 "dma_device_type": 1 00:16:31.565 }, 00:16:31.565 { 00:16:31.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.565 "dma_device_type": 2 00:16:31.565 }, 00:16:31.565 { 00:16:31.565 "dma_device_id": "system", 00:16:31.565 "dma_device_type": 1 00:16:31.565 }, 00:16:31.565 { 00:16:31.565 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:31.565 "dma_device_type": 2 00:16:31.565 } 00:16:31.565 ], 00:16:31.565 "driver_specific": { 00:16:31.565 "raid": { 00:16:31.565 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:31.565 "strip_size_kb": 0, 00:16:31.565 "state": "online", 00:16:31.565 "raid_level": "raid1", 00:16:31.565 "superblock": true, 00:16:31.565 "num_base_bdevs": 4, 00:16:31.565 "num_base_bdevs_discovered": 4, 00:16:31.565 "num_base_bdevs_operational": 4, 00:16:31.565 "base_bdevs_list": [ 00:16:31.565 { 00:16:31.565 "name": "pt1", 00:16:31.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.565 "is_configured": true, 00:16:31.565 "data_offset": 2048, 00:16:31.565 "data_size": 63488 00:16:31.565 }, 00:16:31.565 { 00:16:31.565 "name": "pt2", 00:16:31.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.565 "is_configured": true, 00:16:31.565 "data_offset": 2048, 00:16:31.565 "data_size": 63488 00:16:31.565 }, 00:16:31.565 { 00:16:31.565 "name": "pt3", 00:16:31.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.565 "is_configured": true, 00:16:31.565 "data_offset": 2048, 00:16:31.565 "data_size": 63488 00:16:31.565 }, 00:16:31.565 { 00:16:31.565 "name": "pt4", 00:16:31.565 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.565 "is_configured": true, 00:16:31.565 "data_offset": 2048, 00:16:31.565 "data_size": 63488 00:16:31.565 } 00:16:31.565 ] 00:16:31.565 } 00:16:31.565 } 00:16:31.565 }' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:31.565 pt2 00:16:31.565 pt3 00:16:31.565 pt4' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.565 10:44:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.565 10:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.565 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:31.824 [2024-10-30 10:44:53.096182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce ']' 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.824 [2024-10-30 10:44:53.147833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.824 [2024-10-30 10:44:53.148053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.824 [2024-10-30 10:44:53.148251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.824 [2024-10-30 10:44:53.148493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.824 [2024-10-30 10:44:53.148676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.824 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.083 [2024-10-30 10:44:53.303958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:32.083 [2024-10-30 10:44:53.306601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:32.083 [2024-10-30 10:44:53.306674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:32.083 [2024-10-30 10:44:53.306732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:32.083 [2024-10-30 10:44:53.306809] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:32.083 [2024-10-30 10:44:53.306888] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:32.083 [2024-10-30 10:44:53.306930] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:32.083 [2024-10-30 10:44:53.306961] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:32.083 [2024-10-30 10:44:53.307001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.083 [2024-10-30 10:44:53.307020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:16:32.083 request: 00:16:32.083 { 00:16:32.083 "name": "raid_bdev1", 00:16:32.083 "raid_level": "raid1", 00:16:32.083 "base_bdevs": [ 00:16:32.083 "malloc1", 00:16:32.083 "malloc2", 00:16:32.083 "malloc3", 00:16:32.083 "malloc4" 00:16:32.083 ], 00:16:32.083 "superblock": false, 00:16:32.083 "method": "bdev_raid_create", 00:16:32.083 "req_id": 1 00:16:32.083 } 00:16:32.083 Got JSON-RPC error response 00:16:32.083 response: 00:16:32.083 { 00:16:32.083 "code": -17, 00:16:32.083 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:32.083 } 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.083 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.084 10:44:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.084 [2024-10-30 10:44:53.367925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.084 [2024-10-30 10:44:53.368169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.084 [2024-10-30 10:44:53.368239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:32.084 [2024-10-30 10:44:53.368353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.084 [2024-10-30 10:44:53.371464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.084 [2024-10-30 10:44:53.371675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.084 [2024-10-30 10:44:53.371907] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:32.084 [2024-10-30 10:44:53.372148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.084 pt1 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.084 10:44:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.084 "name": "raid_bdev1", 00:16:32.084 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:32.084 "strip_size_kb": 0, 00:16:32.084 "state": "configuring", 00:16:32.084 "raid_level": "raid1", 00:16:32.084 "superblock": true, 00:16:32.084 "num_base_bdevs": 4, 00:16:32.084 "num_base_bdevs_discovered": 1, 00:16:32.084 "num_base_bdevs_operational": 4, 00:16:32.084 "base_bdevs_list": [ 00:16:32.084 { 00:16:32.084 "name": "pt1", 00:16:32.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.084 "is_configured": true, 00:16:32.084 "data_offset": 2048, 00:16:32.084 "data_size": 63488 00:16:32.084 }, 00:16:32.084 { 00:16:32.084 "name": null, 00:16:32.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.084 "is_configured": false, 00:16:32.084 "data_offset": 2048, 00:16:32.084 "data_size": 63488 00:16:32.084 }, 00:16:32.084 { 00:16:32.084 "name": null, 00:16:32.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.084 
"is_configured": false, 00:16:32.084 "data_offset": 2048, 00:16:32.084 "data_size": 63488 00:16:32.084 }, 00:16:32.084 { 00:16:32.084 "name": null, 00:16:32.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.084 "is_configured": false, 00:16:32.084 "data_offset": 2048, 00:16:32.084 "data_size": 63488 00:16:32.084 } 00:16:32.084 ] 00:16:32.084 }' 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.084 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.651 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:32.651 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.651 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.651 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.652 [2024-10-30 10:44:53.916212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.652 [2024-10-30 10:44:53.916436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.652 [2024-10-30 10:44:53.916475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:32.652 [2024-10-30 10:44:53.916494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.652 [2024-10-30 10:44:53.917088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.652 [2024-10-30 10:44:53.917124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.652 [2024-10-30 10:44:53.917228] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:32.652 [2024-10-30 10:44:53.917272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:16:32.652 pt2 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.652 [2024-10-30 10:44:53.924191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.652 "name": "raid_bdev1", 00:16:32.652 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:32.652 "strip_size_kb": 0, 00:16:32.652 "state": "configuring", 00:16:32.652 "raid_level": "raid1", 00:16:32.652 "superblock": true, 00:16:32.652 "num_base_bdevs": 4, 00:16:32.652 "num_base_bdevs_discovered": 1, 00:16:32.652 "num_base_bdevs_operational": 4, 00:16:32.652 "base_bdevs_list": [ 00:16:32.652 { 00:16:32.652 "name": "pt1", 00:16:32.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.652 "is_configured": true, 00:16:32.652 "data_offset": 2048, 00:16:32.652 "data_size": 63488 00:16:32.652 }, 00:16:32.652 { 00:16:32.652 "name": null, 00:16:32.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.652 "is_configured": false, 00:16:32.652 "data_offset": 0, 00:16:32.652 "data_size": 63488 00:16:32.652 }, 00:16:32.652 { 00:16:32.652 "name": null, 00:16:32.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.652 "is_configured": false, 00:16:32.652 "data_offset": 2048, 00:16:32.652 "data_size": 63488 00:16:32.652 }, 00:16:32.652 { 00:16:32.652 "name": null, 00:16:32.652 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.652 "is_configured": false, 00:16:32.652 "data_offset": 2048, 00:16:32.652 "data_size": 63488 00:16:32.652 } 00:16:32.652 ] 00:16:32.652 }' 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.652 10:44:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.219 [2024-10-30 10:44:54.464337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.219 [2024-10-30 10:44:54.464456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.219 [2024-10-30 10:44:54.464493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:33.219 [2024-10-30 10:44:54.464509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.219 [2024-10-30 10:44:54.465087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.219 [2024-10-30 10:44:54.465124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.219 [2024-10-30 10:44:54.465229] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:33.219 [2024-10-30 10:44:54.465261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.219 pt2 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:33.219 10:44:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.219 [2024-10-30 10:44:54.476306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:33.219 [2024-10-30 10:44:54.476409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.219 [2024-10-30 10:44:54.476435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:33.219 [2024-10-30 10:44:54.476447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.219 [2024-10-30 10:44:54.476856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.219 [2024-10-30 10:44:54.476887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:33.219 [2024-10-30 10:44:54.476963] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:33.219 [2024-10-30 10:44:54.477039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:33.219 pt3 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.219 [2024-10-30 10:44:54.484277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:33.219 [2024-10-30 
10:44:54.484357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.219 [2024-10-30 10:44:54.484382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:33.219 [2024-10-30 10:44:54.484395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.219 [2024-10-30 10:44:54.484850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.219 [2024-10-30 10:44:54.484883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:33.219 [2024-10-30 10:44:54.484961] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:33.219 [2024-10-30 10:44:54.485006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:33.219 [2024-10-30 10:44:54.485188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:33.219 [2024-10-30 10:44:54.485204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:33.219 [2024-10-30 10:44:54.485542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:33.219 [2024-10-30 10:44:54.485745] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:33.219 [2024-10-30 10:44:54.485765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:33.219 [2024-10-30 10:44:54.485946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.219 pt4 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.219 "name": "raid_bdev1", 00:16:33.219 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:33.219 "strip_size_kb": 0, 00:16:33.219 "state": "online", 00:16:33.219 "raid_level": "raid1", 00:16:33.219 "superblock": true, 00:16:33.219 "num_base_bdevs": 4, 00:16:33.219 
"num_base_bdevs_discovered": 4, 00:16:33.219 "num_base_bdevs_operational": 4, 00:16:33.219 "base_bdevs_list": [ 00:16:33.219 { 00:16:33.219 "name": "pt1", 00:16:33.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.219 "is_configured": true, 00:16:33.219 "data_offset": 2048, 00:16:33.219 "data_size": 63488 00:16:33.219 }, 00:16:33.219 { 00:16:33.219 "name": "pt2", 00:16:33.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.219 "is_configured": true, 00:16:33.219 "data_offset": 2048, 00:16:33.219 "data_size": 63488 00:16:33.219 }, 00:16:33.219 { 00:16:33.219 "name": "pt3", 00:16:33.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.219 "is_configured": true, 00:16:33.219 "data_offset": 2048, 00:16:33.219 "data_size": 63488 00:16:33.219 }, 00:16:33.219 { 00:16:33.219 "name": "pt4", 00:16:33.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.219 "is_configured": true, 00:16:33.219 "data_offset": 2048, 00:16:33.219 "data_size": 63488 00:16:33.219 } 00:16:33.219 ] 00:16:33.219 }' 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.219 10:44:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.786 10:44:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.786 [2024-10-30 10:44:55.020897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.786 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.786 "name": "raid_bdev1", 00:16:33.786 "aliases": [ 00:16:33.786 "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce" 00:16:33.786 ], 00:16:33.786 "product_name": "Raid Volume", 00:16:33.786 "block_size": 512, 00:16:33.786 "num_blocks": 63488, 00:16:33.786 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:33.786 "assigned_rate_limits": { 00:16:33.786 "rw_ios_per_sec": 0, 00:16:33.786 "rw_mbytes_per_sec": 0, 00:16:33.786 "r_mbytes_per_sec": 0, 00:16:33.786 "w_mbytes_per_sec": 0 00:16:33.786 }, 00:16:33.786 "claimed": false, 00:16:33.786 "zoned": false, 00:16:33.786 "supported_io_types": { 00:16:33.786 "read": true, 00:16:33.786 "write": true, 00:16:33.786 "unmap": false, 00:16:33.786 "flush": false, 00:16:33.786 "reset": true, 00:16:33.786 "nvme_admin": false, 00:16:33.786 "nvme_io": false, 00:16:33.786 "nvme_io_md": false, 00:16:33.786 "write_zeroes": true, 00:16:33.786 "zcopy": false, 00:16:33.786 "get_zone_info": false, 00:16:33.786 "zone_management": false, 00:16:33.786 "zone_append": false, 00:16:33.786 "compare": false, 00:16:33.786 "compare_and_write": false, 00:16:33.786 "abort": false, 00:16:33.786 "seek_hole": false, 00:16:33.786 "seek_data": false, 00:16:33.786 "copy": false, 00:16:33.786 "nvme_iov_md": false 00:16:33.786 }, 00:16:33.786 "memory_domains": [ 00:16:33.786 { 00:16:33.786 "dma_device_id": "system", 00:16:33.786 
"dma_device_type": 1 00:16:33.786 }, 00:16:33.786 { 00:16:33.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.786 "dma_device_type": 2 00:16:33.786 }, 00:16:33.786 { 00:16:33.786 "dma_device_id": "system", 00:16:33.786 "dma_device_type": 1 00:16:33.786 }, 00:16:33.786 { 00:16:33.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.786 "dma_device_type": 2 00:16:33.786 }, 00:16:33.786 { 00:16:33.786 "dma_device_id": "system", 00:16:33.786 "dma_device_type": 1 00:16:33.786 }, 00:16:33.786 { 00:16:33.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.786 "dma_device_type": 2 00:16:33.786 }, 00:16:33.786 { 00:16:33.786 "dma_device_id": "system", 00:16:33.786 "dma_device_type": 1 00:16:33.786 }, 00:16:33.786 { 00:16:33.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.787 "dma_device_type": 2 00:16:33.787 } 00:16:33.787 ], 00:16:33.787 "driver_specific": { 00:16:33.787 "raid": { 00:16:33.787 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:33.787 "strip_size_kb": 0, 00:16:33.787 "state": "online", 00:16:33.787 "raid_level": "raid1", 00:16:33.787 "superblock": true, 00:16:33.787 "num_base_bdevs": 4, 00:16:33.787 "num_base_bdevs_discovered": 4, 00:16:33.787 "num_base_bdevs_operational": 4, 00:16:33.787 "base_bdevs_list": [ 00:16:33.787 { 00:16:33.787 "name": "pt1", 00:16:33.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.787 "is_configured": true, 00:16:33.787 "data_offset": 2048, 00:16:33.787 "data_size": 63488 00:16:33.787 }, 00:16:33.787 { 00:16:33.787 "name": "pt2", 00:16:33.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.787 "is_configured": true, 00:16:33.787 "data_offset": 2048, 00:16:33.787 "data_size": 63488 00:16:33.787 }, 00:16:33.787 { 00:16:33.787 "name": "pt3", 00:16:33.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.787 "is_configured": true, 00:16:33.787 "data_offset": 2048, 00:16:33.787 "data_size": 63488 00:16:33.787 }, 00:16:33.787 { 00:16:33.787 "name": "pt4", 00:16:33.787 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:16:33.787 "is_configured": true, 00:16:33.787 "data_offset": 2048, 00:16:33.787 "data_size": 63488 00:16:33.787 } 00:16:33.787 ] 00:16:33.787 } 00:16:33.787 } 00:16:33.787 }' 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:33.787 pt2 00:16:33.787 pt3 00:16:33.787 pt4' 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.787 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.046 [2024-10-30 10:44:55.392884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce '!=' 9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce ']' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.046 [2024-10-30 10:44:55.436636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:34.046 10:44:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.046 "name": "raid_bdev1", 00:16:34.046 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:34.046 "strip_size_kb": 0, 00:16:34.046 "state": "online", 
00:16:34.046 "raid_level": "raid1", 00:16:34.046 "superblock": true, 00:16:34.046 "num_base_bdevs": 4, 00:16:34.046 "num_base_bdevs_discovered": 3, 00:16:34.046 "num_base_bdevs_operational": 3, 00:16:34.046 "base_bdevs_list": [ 00:16:34.046 { 00:16:34.046 "name": null, 00:16:34.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.046 "is_configured": false, 00:16:34.046 "data_offset": 0, 00:16:34.046 "data_size": 63488 00:16:34.046 }, 00:16:34.046 { 00:16:34.046 "name": "pt2", 00:16:34.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.046 "is_configured": true, 00:16:34.046 "data_offset": 2048, 00:16:34.046 "data_size": 63488 00:16:34.046 }, 00:16:34.046 { 00:16:34.046 "name": "pt3", 00:16:34.046 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.046 "is_configured": true, 00:16:34.046 "data_offset": 2048, 00:16:34.046 "data_size": 63488 00:16:34.046 }, 00:16:34.046 { 00:16:34.046 "name": "pt4", 00:16:34.046 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:34.046 "is_configured": true, 00:16:34.046 "data_offset": 2048, 00:16:34.046 "data_size": 63488 00:16:34.046 } 00:16:34.046 ] 00:16:34.046 }' 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.046 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.614 [2024-10-30 10:44:55.964778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.614 [2024-10-30 10:44:55.964816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:34.614 [2024-10-30 10:44:55.964925] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:16:34.614 [2024-10-30 10:44:55.965070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.614 [2024-10-30 10:44:55.965089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.614 10:44:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:34.614 
10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.614 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.615 [2024-10-30 10:44:56.052758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:34.615 [2024-10-30 10:44:56.053002] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.615 [2024-10-30 10:44:56.053045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:34.615 [2024-10-30 10:44:56.053061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.615 [2024-10-30 10:44:56.056026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.615 [2024-10-30 10:44:56.056218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:34.615 [2024-10-30 10:44:56.056344] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:34.615 [2024-10-30 10:44:56.056407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.615 pt2 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.615 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.874 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.874 "name": "raid_bdev1", 00:16:34.874 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:34.874 "strip_size_kb": 0, 00:16:34.874 "state": "configuring", 00:16:34.874 "raid_level": "raid1", 00:16:34.874 "superblock": true, 00:16:34.874 "num_base_bdevs": 4, 00:16:34.874 "num_base_bdevs_discovered": 1, 00:16:34.874 "num_base_bdevs_operational": 3, 00:16:34.874 "base_bdevs_list": [ 00:16:34.874 { 00:16:34.874 "name": null, 00:16:34.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.874 "is_configured": false, 00:16:34.874 "data_offset": 2048, 00:16:34.874 "data_size": 63488 00:16:34.874 }, 00:16:34.874 { 00:16:34.874 "name": "pt2", 00:16:34.874 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.874 "is_configured": true, 00:16:34.874 "data_offset": 2048, 00:16:34.874 "data_size": 63488 00:16:34.874 }, 00:16:34.874 { 00:16:34.874 "name": null, 00:16:34.874 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.874 "is_configured": false, 00:16:34.874 "data_offset": 2048, 00:16:34.874 "data_size": 63488 00:16:34.874 }, 00:16:34.874 { 00:16:34.874 "name": null, 00:16:34.874 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:34.874 "is_configured": false, 00:16:34.874 "data_offset": 2048, 00:16:34.874 "data_size": 63488 00:16:34.874 } 00:16:34.874 ] 00:16:34.874 }' 
00:16:34.874 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.874 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.133 [2024-10-30 10:44:56.584972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:35.133 [2024-10-30 10:44:56.585193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.133 [2024-10-30 10:44:56.585275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:35.133 [2024-10-30 10:44:56.585439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.133 [2024-10-30 10:44:56.586029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.133 [2024-10-30 10:44:56.586062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:35.133 [2024-10-30 10:44:56.586173] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:35.133 [2024-10-30 10:44:56.586205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:35.133 pt3 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.133 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.392 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.392 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.392 "name": "raid_bdev1", 00:16:35.392 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:35.392 "strip_size_kb": 0, 00:16:35.392 "state": "configuring", 00:16:35.392 "raid_level": "raid1", 00:16:35.392 "superblock": true, 00:16:35.392 "num_base_bdevs": 4, 00:16:35.392 "num_base_bdevs_discovered": 2, 00:16:35.392 "num_base_bdevs_operational": 3, 00:16:35.392 
"base_bdevs_list": [ 00:16:35.392 { 00:16:35.392 "name": null, 00:16:35.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.392 "is_configured": false, 00:16:35.392 "data_offset": 2048, 00:16:35.392 "data_size": 63488 00:16:35.392 }, 00:16:35.392 { 00:16:35.392 "name": "pt2", 00:16:35.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.392 "is_configured": true, 00:16:35.392 "data_offset": 2048, 00:16:35.392 "data_size": 63488 00:16:35.392 }, 00:16:35.392 { 00:16:35.392 "name": "pt3", 00:16:35.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.392 "is_configured": true, 00:16:35.392 "data_offset": 2048, 00:16:35.392 "data_size": 63488 00:16:35.392 }, 00:16:35.392 { 00:16:35.392 "name": null, 00:16:35.392 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:35.392 "is_configured": false, 00:16:35.392 "data_offset": 2048, 00:16:35.392 "data_size": 63488 00:16:35.392 } 00:16:35.392 ] 00:16:35.392 }' 00:16:35.392 10:44:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.392 10:44:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.651 [2024-10-30 10:44:57.109111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:35.651 [2024-10-30 10:44:57.109188] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.651 [2024-10-30 10:44:57.109221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:35.651 [2024-10-30 10:44:57.109236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.651 [2024-10-30 10:44:57.109797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.651 [2024-10-30 10:44:57.109829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:35.651 [2024-10-30 10:44:57.109966] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:35.651 [2024-10-30 10:44:57.110028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:35.651 [2024-10-30 10:44:57.110217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:35.651 [2024-10-30 10:44:57.110239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:35.651 [2024-10-30 10:44:57.110554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:35.651 [2024-10-30 10:44:57.110757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:35.651 [2024-10-30 10:44:57.110778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:35.651 [2024-10-30 10:44:57.110961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.651 pt4 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.651 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.910 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.910 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.910 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.910 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.910 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.910 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.910 "name": "raid_bdev1", 00:16:35.910 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:35.910 "strip_size_kb": 0, 00:16:35.910 "state": "online", 00:16:35.910 "raid_level": "raid1", 00:16:35.910 "superblock": true, 00:16:35.910 "num_base_bdevs": 4, 00:16:35.910 "num_base_bdevs_discovered": 3, 00:16:35.910 "num_base_bdevs_operational": 3, 00:16:35.910 "base_bdevs_list": [ 00:16:35.910 { 00:16:35.910 "name": null, 00:16:35.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.910 "is_configured": false, 00:16:35.910 
"data_offset": 2048, 00:16:35.910 "data_size": 63488 00:16:35.910 }, 00:16:35.910 { 00:16:35.910 "name": "pt2", 00:16:35.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.910 "is_configured": true, 00:16:35.910 "data_offset": 2048, 00:16:35.910 "data_size": 63488 00:16:35.910 }, 00:16:35.910 { 00:16:35.910 "name": "pt3", 00:16:35.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.910 "is_configured": true, 00:16:35.910 "data_offset": 2048, 00:16:35.910 "data_size": 63488 00:16:35.910 }, 00:16:35.910 { 00:16:35.910 "name": "pt4", 00:16:35.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:35.910 "is_configured": true, 00:16:35.910 "data_offset": 2048, 00:16:35.910 "data_size": 63488 00:16:35.910 } 00:16:35.910 ] 00:16:35.910 }' 00:16:35.910 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.910 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.169 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:36.169 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.169 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.169 [2024-10-30 10:44:57.637164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.169 [2024-10-30 10:44:57.637199] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.170 [2024-10-30 10:44:57.637296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.170 [2024-10-30 10:44:57.637416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.170 [2024-10-30 10:44:57.637436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:36.429 10:44:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.429 [2024-10-30 10:44:57.713181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:36.429 [2024-10-30 10:44:57.713264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:36.429 [2024-10-30 10:44:57.713291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:36.429 [2024-10-30 10:44:57.713308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.429 [2024-10-30 10:44:57.716275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.429 [2024-10-30 10:44:57.716329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:36.429 [2024-10-30 10:44:57.716441] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:36.429 [2024-10-30 10:44:57.716507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:36.429 [2024-10-30 10:44:57.716671] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:36.429 [2024-10-30 10:44:57.716695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.429 [2024-10-30 10:44:57.716723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:36.429 [2024-10-30 10:44:57.716818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.429 [2024-10-30 10:44:57.717017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:36.429 pt1 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.429 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.429 "name": "raid_bdev1", 00:16:36.429 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:36.429 "strip_size_kb": 0, 00:16:36.429 "state": "configuring", 00:16:36.429 "raid_level": "raid1", 00:16:36.429 "superblock": true, 00:16:36.429 "num_base_bdevs": 4, 00:16:36.429 "num_base_bdevs_discovered": 2, 00:16:36.429 "num_base_bdevs_operational": 3, 00:16:36.429 "base_bdevs_list": [ 00:16:36.429 { 00:16:36.429 "name": null, 00:16:36.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.430 "is_configured": false, 00:16:36.430 "data_offset": 2048, 00:16:36.430 
"data_size": 63488 00:16:36.430 }, 00:16:36.430 { 00:16:36.430 "name": "pt2", 00:16:36.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.430 "is_configured": true, 00:16:36.430 "data_offset": 2048, 00:16:36.430 "data_size": 63488 00:16:36.430 }, 00:16:36.430 { 00:16:36.430 "name": "pt3", 00:16:36.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.430 "is_configured": true, 00:16:36.430 "data_offset": 2048, 00:16:36.430 "data_size": 63488 00:16:36.430 }, 00:16:36.430 { 00:16:36.430 "name": null, 00:16:36.430 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:36.430 "is_configured": false, 00:16:36.430 "data_offset": 2048, 00:16:36.430 "data_size": 63488 00:16:36.430 } 00:16:36.430 ] 00:16:36.430 }' 00:16:36.430 10:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.430 10:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 [2024-10-30 
10:44:58.273412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:36.997 [2024-10-30 10:44:58.273501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.997 [2024-10-30 10:44:58.273534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:36.997 [2024-10-30 10:44:58.273549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.997 [2024-10-30 10:44:58.274126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.997 [2024-10-30 10:44:58.274157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:36.997 [2024-10-30 10:44:58.274266] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:36.997 [2024-10-30 10:44:58.274306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:36.997 [2024-10-30 10:44:58.274508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:36.997 [2024-10-30 10:44:58.274524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:36.997 [2024-10-30 10:44:58.274839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:36.997 [2024-10-30 10:44:58.275055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:36.997 [2024-10-30 10:44:58.275077] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:36.997 [2024-10-30 10:44:58.275266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.997 pt4 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:36.997 10:44:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.997 "name": "raid_bdev1", 00:16:36.997 "uuid": "9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce", 00:16:36.997 "strip_size_kb": 0, 00:16:36.997 "state": "online", 00:16:36.997 "raid_level": "raid1", 00:16:36.997 "superblock": true, 00:16:36.997 "num_base_bdevs": 4, 00:16:36.997 "num_base_bdevs_discovered": 3, 00:16:36.997 "num_base_bdevs_operational": 3, 00:16:36.997 "base_bdevs_list": [ 00:16:36.997 { 
00:16:36.997 "name": null, 00:16:36.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.997 "is_configured": false, 00:16:36.997 "data_offset": 2048, 00:16:36.997 "data_size": 63488 00:16:36.997 }, 00:16:36.997 { 00:16:36.997 "name": "pt2", 00:16:36.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.997 "is_configured": true, 00:16:36.997 "data_offset": 2048, 00:16:36.997 "data_size": 63488 00:16:36.997 }, 00:16:36.997 { 00:16:36.997 "name": "pt3", 00:16:36.997 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.997 "is_configured": true, 00:16:36.997 "data_offset": 2048, 00:16:36.997 "data_size": 63488 00:16:36.997 }, 00:16:36.997 { 00:16:36.997 "name": "pt4", 00:16:36.997 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:36.997 "is_configured": true, 00:16:36.997 "data_offset": 2048, 00:16:36.997 "data_size": 63488 00:16:36.997 } 00:16:36.997 ] 00:16:36.997 }' 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.997 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.566 
10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:37.566 [2024-10-30 10:44:58.825878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce '!=' 9c650a37-8ff4-42cc-8e2e-a89eb4ef25ce ']' 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74854 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74854 ']' 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74854 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74854 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:37.566 killing process with pid 74854 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74854' 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74854 00:16:37.566 [2024-10-30 10:44:58.901487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.566 10:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74854 00:16:37.566 [2024-10-30 10:44:58.901613] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.566 [2024-10-30 10:44:58.901709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.566 [2024-10-30 10:44:58.901729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:37.824 [2024-10-30 10:44:59.249575] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:39.226 10:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:39.226 00:16:39.226 real 0m9.423s 00:16:39.226 user 0m15.574s 00:16:39.226 sys 0m1.287s 00:16:39.226 10:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:39.226 ************************************ 00:16:39.226 10:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.226 END TEST raid_superblock_test 00:16:39.226 ************************************ 00:16:39.226 10:45:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:39.226 10:45:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:39.226 10:45:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:39.226 10:45:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:39.226 ************************************ 00:16:39.226 START TEST raid_read_error_test 00:16:39.226 ************************************ 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:39.226 10:45:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.C4QeBkgAfT 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75352 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75352 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75352 ']' 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:39.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:39.226 10:45:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.226 [2024-10-30 10:45:00.436350] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:16:39.226 [2024-10-30 10:45:00.436582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75352 ] 00:16:39.226 [2024-10-30 10:45:00.620620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.486 [2024-10-30 10:45:00.782039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.744 [2024-10-30 10:45:01.012803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.744 [2024-10-30 10:45:01.012895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.003 BaseBdev1_malloc 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.003 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 true 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 [2024-10-30 10:45:01.482905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:40.263 [2024-10-30 10:45:01.483171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.263 [2024-10-30 10:45:01.483212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:40.263 [2024-10-30 10:45:01.483231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.263 [2024-10-30 10:45:01.486200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.263 BaseBdev1 00:16:40.263 [2024-10-30 10:45:01.486372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 BaseBdev2_malloc 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 true 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 [2024-10-30 10:45:01.540127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:40.263 [2024-10-30 10:45:01.540333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.263 [2024-10-30 10:45:01.540401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:40.263 [2024-10-30 10:45:01.540522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.263 [2024-10-30 10:45:01.543471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.263 [2024-10-30 10:45:01.543553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:40.263 BaseBdev2 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 BaseBdev3_malloc 00:16:40.263 10:45:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 true 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 [2024-10-30 10:45:01.615035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:40.263 [2024-10-30 10:45:01.615247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.263 [2024-10-30 10:45:01.615284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:40.263 [2024-10-30 10:45:01.615303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.263 [2024-10-30 10:45:01.618152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.263 [2024-10-30 10:45:01.618200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:40.263 BaseBdev3 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 BaseBdev4_malloc 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 true 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 [2024-10-30 10:45:01.675615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:40.263 [2024-10-30 10:45:01.675843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.263 [2024-10-30 10:45:01.675880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:40.263 [2024-10-30 10:45:01.675899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.263 [2024-10-30 10:45:01.678697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.263 [2024-10-30 10:45:01.678762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:40.263 BaseBdev4 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 [2024-10-30 10:45:01.683731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.263 [2024-10-30 10:45:01.686364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.263 [2024-10-30 10:45:01.686642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.263 [2024-10-30 10:45:01.686762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:40.263 [2024-10-30 10:45:01.687159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:40.263 [2024-10-30 10:45:01.687185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:40.263 [2024-10-30 10:45:01.687533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:40.263 [2024-10-30 10:45:01.687775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:40.263 [2024-10-30 10:45:01.687791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:40.263 [2024-10-30 10:45:01.688050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:40.263 10:45:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.263 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.522 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.522 "name": "raid_bdev1", 00:16:40.522 "uuid": "75ffce3a-3dfb-4754-a3d9-82b68ed13406", 00:16:40.522 "strip_size_kb": 0, 00:16:40.522 "state": "online", 00:16:40.522 "raid_level": "raid1", 00:16:40.522 "superblock": true, 00:16:40.522 "num_base_bdevs": 4, 00:16:40.522 "num_base_bdevs_discovered": 4, 00:16:40.522 "num_base_bdevs_operational": 4, 00:16:40.522 "base_bdevs_list": [ 00:16:40.522 { 
00:16:40.522 "name": "BaseBdev1", 00:16:40.522 "uuid": "0171208e-082c-5833-b560-fca9fc4d17f5", 00:16:40.522 "is_configured": true, 00:16:40.522 "data_offset": 2048, 00:16:40.523 "data_size": 63488 00:16:40.523 }, 00:16:40.523 { 00:16:40.523 "name": "BaseBdev2", 00:16:40.523 "uuid": "186f9a80-4919-587b-916e-13c632f4670f", 00:16:40.523 "is_configured": true, 00:16:40.523 "data_offset": 2048, 00:16:40.523 "data_size": 63488 00:16:40.523 }, 00:16:40.523 { 00:16:40.523 "name": "BaseBdev3", 00:16:40.523 "uuid": "2022d56a-929a-5c75-b9aa-0c0dad333361", 00:16:40.523 "is_configured": true, 00:16:40.523 "data_offset": 2048, 00:16:40.523 "data_size": 63488 00:16:40.523 }, 00:16:40.523 { 00:16:40.523 "name": "BaseBdev4", 00:16:40.523 "uuid": "ecf9d34d-f8a6-5ec6-b1d3-0f9caa8c7b66", 00:16:40.523 "is_configured": true, 00:16:40.523 "data_offset": 2048, 00:16:40.523 "data_size": 63488 00:16:40.523 } 00:16:40.523 ] 00:16:40.523 }' 00:16:40.523 10:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.523 10:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.782 10:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:40.782 10:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:41.040 [2024-10-30 10:45:02.361644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.974 10:45:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.974 10:45:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.974 "name": "raid_bdev1", 00:16:41.974 "uuid": "75ffce3a-3dfb-4754-a3d9-82b68ed13406", 00:16:41.974 "strip_size_kb": 0, 00:16:41.974 "state": "online", 00:16:41.974 "raid_level": "raid1", 00:16:41.974 "superblock": true, 00:16:41.974 "num_base_bdevs": 4, 00:16:41.974 "num_base_bdevs_discovered": 4, 00:16:41.974 "num_base_bdevs_operational": 4, 00:16:41.974 "base_bdevs_list": [ 00:16:41.974 { 00:16:41.974 "name": "BaseBdev1", 00:16:41.974 "uuid": "0171208e-082c-5833-b560-fca9fc4d17f5", 00:16:41.974 "is_configured": true, 00:16:41.974 "data_offset": 2048, 00:16:41.974 "data_size": 63488 00:16:41.974 }, 00:16:41.974 { 00:16:41.974 "name": "BaseBdev2", 00:16:41.974 "uuid": "186f9a80-4919-587b-916e-13c632f4670f", 00:16:41.974 "is_configured": true, 00:16:41.974 "data_offset": 2048, 00:16:41.974 "data_size": 63488 00:16:41.974 }, 00:16:41.974 { 00:16:41.974 "name": "BaseBdev3", 00:16:41.974 "uuid": "2022d56a-929a-5c75-b9aa-0c0dad333361", 00:16:41.974 "is_configured": true, 00:16:41.974 "data_offset": 2048, 00:16:41.974 "data_size": 63488 00:16:41.974 }, 00:16:41.974 { 00:16:41.974 "name": "BaseBdev4", 00:16:41.974 "uuid": "ecf9d34d-f8a6-5ec6-b1d3-0f9caa8c7b66", 00:16:41.974 "is_configured": true, 00:16:41.974 "data_offset": 2048, 00:16:41.974 "data_size": 63488 00:16:41.974 } 00:16:41.974 ] 00:16:41.974 }' 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.974 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.540 [2024-10-30 10:45:03.764257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.540 [2024-10-30 10:45:03.764297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.540 { 00:16:42.540 "results": [ 00:16:42.540 { 00:16:42.540 "job": "raid_bdev1", 00:16:42.540 "core_mask": "0x1", 00:16:42.540 "workload": "randrw", 00:16:42.540 "percentage": 50, 00:16:42.540 "status": "finished", 00:16:42.540 "queue_depth": 1, 00:16:42.540 "io_size": 131072, 00:16:42.540 "runtime": 1.400007, 00:16:42.540 "iops": 7623.533310904874, 00:16:42.540 "mibps": 952.9416638631093, 00:16:42.540 "io_failed": 0, 00:16:42.540 "io_timeout": 0, 00:16:42.540 "avg_latency_us": 127.04176894968614, 00:16:42.540 "min_latency_us": 40.261818181818185, 00:16:42.540 "max_latency_us": 2010.7636363636364 00:16:42.540 } 00:16:42.540 ], 00:16:42.540 "core_count": 1 00:16:42.540 } 00:16:42.540 [2024-10-30 10:45:03.768051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.540 [2024-10-30 10:45:03.768129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.540 [2024-10-30 10:45:03.768346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.540 [2024-10-30 10:45:03.768385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75352 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75352 ']' 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75352 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:16:42.540 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:42.541 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75352 00:16:42.541 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:42.541 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:42.541 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75352' 00:16:42.541 killing process with pid 75352 00:16:42.541 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75352 00:16:42.541 [2024-10-30 10:45:03.805474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.541 10:45:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75352 00:16:42.798 [2024-10-30 10:45:04.084463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.C4QeBkgAfT 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:43.732 00:16:43.732 real 0m4.869s 00:16:43.732 user 0m6.011s 00:16:43.732 sys 0m0.613s 
00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:43.732 10:45:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.732 ************************************ 00:16:43.732 END TEST raid_read_error_test 00:16:43.732 ************************************ 00:16:43.991 10:45:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:43.991 10:45:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:43.991 10:45:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:43.991 10:45:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 ************************************ 00:16:43.991 START TEST raid_write_error_test 00:16:43.991 ************************************ 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wvIQY1gpnl 00:16:43.991 10:45:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75498 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75498 00:16:43.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75498 ']' 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:43.991 10:45:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.991 [2024-10-30 10:45:05.378658] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:16:43.991 [2024-10-30 10:45:05.378870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75498 ] 00:16:44.250 [2024-10-30 10:45:05.568705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.250 [2024-10-30 10:45:05.697666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.508 [2024-10-30 10:45:05.906315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.508 [2024-10-30 10:45:05.906405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.076 BaseBdev1_malloc 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.076 true 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.076 [2024-10-30 10:45:06.382781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:45.076 [2024-10-30 10:45:06.383007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.076 [2024-10-30 10:45:06.383049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:45.076 [2024-10-30 10:45:06.383068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.076 [2024-10-30 10:45:06.385923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.076 BaseBdev1 00:16:45.076 [2024-10-30 10:45:06.386144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.076 BaseBdev2_malloc 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:45.076 10:45:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.076 true 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.076 [2024-10-30 10:45:06.438989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:45.076 [2024-10-30 10:45:06.439064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.076 [2024-10-30 10:45:06.439094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:45.076 [2024-10-30 10:45:06.439125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.076 [2024-10-30 10:45:06.441897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.076 [2024-10-30 10:45:06.441952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:45.076 BaseBdev2 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.076 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:45.077 BaseBdev3_malloc 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.077 true 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.077 [2024-10-30 10:45:06.509859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:45.077 [2024-10-30 10:45:06.509931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.077 [2024-10-30 10:45:06.509959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:45.077 [2024-10-30 10:45:06.509990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.077 [2024-10-30 10:45:06.512816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.077 [2024-10-30 10:45:06.512874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.077 BaseBdev3 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.077 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.336 BaseBdev4_malloc 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.336 true 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.336 [2024-10-30 10:45:06.569958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:45.336 [2024-10-30 10:45:06.570075] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.336 [2024-10-30 10:45:06.570102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:45.336 [2024-10-30 10:45:06.570121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.336 [2024-10-30 10:45:06.572888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.336 [2024-10-30 10:45:06.572945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:45.336 BaseBdev4 
00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.336 [2024-10-30 10:45:06.578063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.336 [2024-10-30 10:45:06.580671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.336 [2024-10-30 10:45:06.580922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.336 [2024-10-30 10:45:06.581192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:45.336 [2024-10-30 10:45:06.581513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:45.336 [2024-10-30 10:45:06.581537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:45.336 [2024-10-30 10:45:06.581837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:45.336 [2024-10-30 10:45:06.582100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:45.336 [2024-10-30 10:45:06.582117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:45.336 [2024-10-30 10:45:06.582355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.336 "name": "raid_bdev1", 00:16:45.336 "uuid": "00a7142c-53c2-4d47-bb23-dd217111ac3b", 00:16:45.336 "strip_size_kb": 0, 00:16:45.336 "state": "online", 00:16:45.336 "raid_level": "raid1", 00:16:45.336 "superblock": true, 00:16:45.336 "num_base_bdevs": 4, 00:16:45.336 "num_base_bdevs_discovered": 4, 00:16:45.336 
"num_base_bdevs_operational": 4, 00:16:45.336 "base_bdevs_list": [ 00:16:45.336 { 00:16:45.336 "name": "BaseBdev1", 00:16:45.336 "uuid": "75c26d7e-50cd-5622-b596-9d81867678a2", 00:16:45.336 "is_configured": true, 00:16:45.336 "data_offset": 2048, 00:16:45.336 "data_size": 63488 00:16:45.336 }, 00:16:45.336 { 00:16:45.336 "name": "BaseBdev2", 00:16:45.336 "uuid": "129e6411-c1c3-5585-9e53-9f82c114a2e2", 00:16:45.336 "is_configured": true, 00:16:45.336 "data_offset": 2048, 00:16:45.336 "data_size": 63488 00:16:45.336 }, 00:16:45.336 { 00:16:45.336 "name": "BaseBdev3", 00:16:45.336 "uuid": "413e532a-ee64-5eda-9637-f8b6626d1ce2", 00:16:45.336 "is_configured": true, 00:16:45.336 "data_offset": 2048, 00:16:45.336 "data_size": 63488 00:16:45.336 }, 00:16:45.336 { 00:16:45.336 "name": "BaseBdev4", 00:16:45.336 "uuid": "19986bfb-7f4c-55bf-a25a-af6606927b37", 00:16:45.336 "is_configured": true, 00:16:45.336 "data_offset": 2048, 00:16:45.336 "data_size": 63488 00:16:45.336 } 00:16:45.336 ] 00:16:45.336 }' 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.336 10:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.903 10:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:45.903 10:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:45.903 [2024-10-30 10:45:07.256101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.868 [2024-10-30 10:45:08.138099] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:46.868 [2024-10-30 10:45:08.138174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.868 [2024-10-30 10:45:08.138449] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.868 "name": "raid_bdev1", 00:16:46.868 "uuid": "00a7142c-53c2-4d47-bb23-dd217111ac3b", 00:16:46.868 "strip_size_kb": 0, 00:16:46.868 "state": "online", 00:16:46.868 "raid_level": "raid1", 00:16:46.868 "superblock": true, 00:16:46.868 "num_base_bdevs": 4, 00:16:46.868 "num_base_bdevs_discovered": 3, 00:16:46.868 "num_base_bdevs_operational": 3, 00:16:46.868 "base_bdevs_list": [ 00:16:46.868 { 00:16:46.868 "name": null, 00:16:46.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.868 "is_configured": false, 00:16:46.868 "data_offset": 0, 00:16:46.868 "data_size": 63488 00:16:46.868 }, 00:16:46.868 { 00:16:46.868 "name": "BaseBdev2", 00:16:46.868 "uuid": "129e6411-c1c3-5585-9e53-9f82c114a2e2", 00:16:46.868 "is_configured": true, 00:16:46.868 "data_offset": 2048, 00:16:46.868 "data_size": 63488 00:16:46.868 }, 00:16:46.868 { 00:16:46.868 "name": "BaseBdev3", 00:16:46.868 "uuid": "413e532a-ee64-5eda-9637-f8b6626d1ce2", 00:16:46.868 "is_configured": true, 00:16:46.868 "data_offset": 2048, 00:16:46.868 "data_size": 63488 00:16:46.868 }, 00:16:46.868 { 00:16:46.868 "name": "BaseBdev4", 00:16:46.868 "uuid": "19986bfb-7f4c-55bf-a25a-af6606927b37", 00:16:46.868 "is_configured": true, 00:16:46.868 "data_offset": 2048, 00:16:46.868 "data_size": 63488 00:16:46.868 } 00:16:46.868 ] 
00:16:46.868 }' 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.868 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.434 [2024-10-30 10:45:08.642535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.434 [2024-10-30 10:45:08.642715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.434 { 00:16:47.434 "results": [ 00:16:47.434 { 00:16:47.434 "job": "raid_bdev1", 00:16:47.434 "core_mask": "0x1", 00:16:47.434 "workload": "randrw", 00:16:47.434 "percentage": 50, 00:16:47.434 "status": "finished", 00:16:47.434 "queue_depth": 1, 00:16:47.434 "io_size": 131072, 00:16:47.434 "runtime": 1.383869, 00:16:47.434 "iops": 8484.907169681523, 00:16:47.434 "mibps": 1060.6133962101903, 00:16:47.434 "io_failed": 0, 00:16:47.434 "io_timeout": 0, 00:16:47.434 "avg_latency_us": 113.6190718632415, 00:16:47.434 "min_latency_us": 40.261818181818185, 00:16:47.434 "max_latency_us": 1787.3454545454545 00:16:47.434 } 00:16:47.434 ], 00:16:47.434 "core_count": 1 00:16:47.434 } 00:16:47.434 [2024-10-30 10:45:08.646106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.434 [2024-10-30 10:45:08.646164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.434 [2024-10-30 10:45:08.646304] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.434 [2024-10-30 10:45:08.646322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75498 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75498 ']' 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75498 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75498 00:16:47.434 killing process with pid 75498 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75498' 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75498 00:16:47.434 [2024-10-30 10:45:08.680901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.434 10:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75498 00:16:47.692 [2024-10-30 10:45:08.969123] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wvIQY1gpnl 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:48.628 00:16:48.628 real 0m4.809s 00:16:48.628 user 0m5.910s 00:16:48.628 sys 0m0.605s 00:16:48.628 ************************************ 00:16:48.628 END TEST raid_write_error_test 00:16:48.628 ************************************ 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:48.628 10:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.886 10:45:10 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:16:48.886 10:45:10 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:48.887 10:45:10 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:16:48.887 10:45:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:48.887 10:45:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:48.887 10:45:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.887 ************************************ 00:16:48.887 START TEST raid_rebuild_test 00:16:48.887 ************************************ 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:48.887 
10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75646 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75646 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75646 ']' 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:48.887 10:45:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.887 [2024-10-30 10:45:10.222508] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:16:48.887 [2024-10-30 10:45:10.222926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:48.887 Zero copy mechanism will not be used. 
00:16:48.887 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75646 ] 00:16:49.144 [2024-10-30 10:45:10.410934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.144 [2024-10-30 10:45:10.568497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.402 [2024-10-30 10:45:10.783316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.402 [2024-10-30 10:45:10.783395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 BaseBdev1_malloc 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 [2024-10-30 10:45:11.258012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:49.968 [2024-10-30 10:45:11.258241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.968 [2024-10-30 
10:45:11.258320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:49.968 [2024-10-30 10:45:11.258453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.968 [2024-10-30 10:45:11.261246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.968 [2024-10-30 10:45:11.261427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.968 BaseBdev1 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 BaseBdev2_malloc 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 [2024-10-30 10:45:11.313836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:49.968 [2024-10-30 10:45:11.314056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.968 [2024-10-30 10:45:11.314129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:49.968 [2024-10-30 10:45:11.314273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:49.968 [2024-10-30 10:45:11.317027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.968 [2024-10-30 10:45:11.317185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.968 BaseBdev2 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 spare_malloc 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 spare_delay 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 [2024-10-30 10:45:11.396525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:49.968 [2024-10-30 10:45:11.396602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.968 [2024-10-30 10:45:11.396632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:16:49.968 [2024-10-30 10:45:11.396652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.968 [2024-10-30 10:45:11.399460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.968 spare 00:16:49.968 [2024-10-30 10:45:11.399650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.968 [2024-10-30 10:45:11.404614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.968 [2024-10-30 10:45:11.407061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:49.968 [2024-10-30 10:45:11.407201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:49.968 [2024-10-30 10:45:11.407223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:49.968 [2024-10-30 10:45:11.407545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:49.968 [2024-10-30 10:45:11.407754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:49.968 [2024-10-30 10:45:11.407773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:49.968 [2024-10-30 10:45:11.407956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.968 
10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.968 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.969 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.969 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.969 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.969 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.969 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.969 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.969 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.969 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.228 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.228 "name": "raid_bdev1", 00:16:50.228 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:16:50.228 "strip_size_kb": 0, 00:16:50.228 "state": "online", 00:16:50.228 "raid_level": "raid1", 00:16:50.228 "superblock": false, 00:16:50.228 "num_base_bdevs": 2, 00:16:50.228 "num_base_bdevs_discovered": 
2, 00:16:50.228 "num_base_bdevs_operational": 2, 00:16:50.228 "base_bdevs_list": [ 00:16:50.228 { 00:16:50.228 "name": "BaseBdev1", 00:16:50.228 "uuid": "42953d17-56a5-5103-91f3-c1be84473690", 00:16:50.228 "is_configured": true, 00:16:50.228 "data_offset": 0, 00:16:50.228 "data_size": 65536 00:16:50.228 }, 00:16:50.228 { 00:16:50.228 "name": "BaseBdev2", 00:16:50.228 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:16:50.228 "is_configured": true, 00:16:50.228 "data_offset": 0, 00:16:50.228 "data_size": 65536 00:16:50.228 } 00:16:50.228 ] 00:16:50.228 }' 00:16:50.228 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.228 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.486 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:50.487 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.487 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.487 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.487 [2024-10-30 10:45:11.921121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.487 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.746 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:50.746 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.746 10:45:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:50.746 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.746 10:45:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.746 10:45:11 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:50.746 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:51.004 [2024-10-30 10:45:12.300943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:51.004 /dev/nbd0 00:16:51.004 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.004 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.004 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 
00:16:51.004 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:51.004 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.005 1+0 records in 00:16:51.005 1+0 records out 00:16:51.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074894 s, 5.5 MB/s 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:16:51.005 10:45:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:57.631 65536+0 records in 00:16:57.631 65536+0 records out 00:16:57.631 33554432 bytes (34 MB, 32 MiB) copied, 6.3841 s, 5.3 MB/s 00:16:57.631 10:45:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:57.631 10:45:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.631 10:45:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:57.631 10:45:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:57.631 10:45:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:57.631 10:45:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:57.631 10:45:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:57.631 [2024-10-30 10:45:19.041353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:57.631 
10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.631 [2024-10-30 10:45:19.069415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:57.631 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.890 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.890 "name": "raid_bdev1", 00:16:57.890 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:16:57.890 "strip_size_kb": 0, 00:16:57.890 "state": "online", 00:16:57.890 "raid_level": "raid1", 00:16:57.890 "superblock": false, 00:16:57.890 "num_base_bdevs": 2, 00:16:57.890 "num_base_bdevs_discovered": 1, 00:16:57.890 "num_base_bdevs_operational": 1, 00:16:57.890 "base_bdevs_list": [ 00:16:57.890 { 00:16:57.890 "name": null, 00:16:57.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.890 "is_configured": false, 00:16:57.890 "data_offset": 0, 00:16:57.890 "data_size": 65536 00:16:57.890 }, 00:16:57.890 { 00:16:57.890 "name": "BaseBdev2", 00:16:57.890 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:16:57.890 "is_configured": true, 00:16:57.890 "data_offset": 0, 00:16:57.890 "data_size": 65536 00:16:57.890 } 00:16:57.890 ] 00:16:57.890 }' 00:16:57.890 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.890 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.149 10:45:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:58.149 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.149 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.149 [2024-10-30 10:45:19.561669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.149 [2024-10-30 10:45:19.578138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:16:58.149 10:45:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.149 10:45:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:58.149 [2024-10-30 10:45:19.580717] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.538 "name": "raid_bdev1", 00:16:59.538 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:16:59.538 "strip_size_kb": 0, 00:16:59.538 "state": "online", 00:16:59.538 "raid_level": "raid1", 00:16:59.538 "superblock": false, 00:16:59.538 "num_base_bdevs": 2, 00:16:59.538 "num_base_bdevs_discovered": 2, 00:16:59.538 "num_base_bdevs_operational": 2, 00:16:59.538 "process": { 00:16:59.538 "type": "rebuild", 00:16:59.538 "target": "spare", 00:16:59.538 "progress": { 00:16:59.538 "blocks": 20480, 00:16:59.538 "percent": 31 00:16:59.538 } 00:16:59.538 }, 00:16:59.538 "base_bdevs_list": [ 00:16:59.538 { 
00:16:59.538 "name": "spare", 00:16:59.538 "uuid": "7764408e-0985-5d64-b144-08bb34b15ed6", 00:16:59.538 "is_configured": true, 00:16:59.538 "data_offset": 0, 00:16:59.538 "data_size": 65536 00:16:59.538 }, 00:16:59.538 { 00:16:59.538 "name": "BaseBdev2", 00:16:59.538 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:16:59.538 "is_configured": true, 00:16:59.538 "data_offset": 0, 00:16:59.538 "data_size": 65536 00:16:59.538 } 00:16:59.538 ] 00:16:59.538 }' 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.538 [2024-10-30 10:45:20.742230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.538 [2024-10-30 10:45:20.789397] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:59.538 [2024-10-30 10:45:20.789509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.538 [2024-10-30 10:45:20.789535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.538 [2024-10-30 10:45:20.789551] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.538 10:45:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.538 "name": "raid_bdev1", 00:16:59.538 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:16:59.538 "strip_size_kb": 0, 00:16:59.538 "state": "online", 00:16:59.538 "raid_level": "raid1", 00:16:59.538 "superblock": false, 00:16:59.538 "num_base_bdevs": 2, 00:16:59.538 "num_base_bdevs_discovered": 1, 
00:16:59.538 "num_base_bdevs_operational": 1, 00:16:59.538 "base_bdevs_list": [ 00:16:59.538 { 00:16:59.538 "name": null, 00:16:59.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.538 "is_configured": false, 00:16:59.538 "data_offset": 0, 00:16:59.538 "data_size": 65536 00:16:59.538 }, 00:16:59.538 { 00:16:59.538 "name": "BaseBdev2", 00:16:59.538 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:16:59.538 "is_configured": true, 00:16:59.538 "data_offset": 0, 00:16:59.538 "data_size": 65536 00:16:59.538 } 00:16:59.538 ] 00:16:59.538 }' 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.538 10:45:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.159 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.160 "name": "raid_bdev1", 00:17:00.160 "uuid": 
"c808a342-4e93-42de-b3f4-86eb9871be2d", 00:17:00.160 "strip_size_kb": 0, 00:17:00.160 "state": "online", 00:17:00.160 "raid_level": "raid1", 00:17:00.160 "superblock": false, 00:17:00.160 "num_base_bdevs": 2, 00:17:00.160 "num_base_bdevs_discovered": 1, 00:17:00.160 "num_base_bdevs_operational": 1, 00:17:00.160 "base_bdevs_list": [ 00:17:00.160 { 00:17:00.160 "name": null, 00:17:00.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.160 "is_configured": false, 00:17:00.160 "data_offset": 0, 00:17:00.160 "data_size": 65536 00:17:00.160 }, 00:17:00.160 { 00:17:00.160 "name": "BaseBdev2", 00:17:00.160 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:17:00.160 "is_configured": true, 00:17:00.160 "data_offset": 0, 00:17:00.160 "data_size": 65536 00:17:00.160 } 00:17:00.160 ] 00:17:00.160 }' 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.160 [2024-10-30 10:45:21.489772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.160 [2024-10-30 10:45:21.505966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.160 10:45:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:17:00.160 [2024-10-30 10:45:21.508543] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.095 10:45:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.096 10:45:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.096 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.096 "name": "raid_bdev1", 00:17:01.096 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:17:01.096 "strip_size_kb": 0, 00:17:01.096 "state": "online", 00:17:01.096 "raid_level": "raid1", 00:17:01.096 "superblock": false, 00:17:01.096 "num_base_bdevs": 2, 00:17:01.096 "num_base_bdevs_discovered": 2, 00:17:01.096 "num_base_bdevs_operational": 2, 00:17:01.096 "process": { 00:17:01.096 "type": "rebuild", 00:17:01.096 "target": "spare", 00:17:01.096 "progress": { 00:17:01.096 "blocks": 20480, 00:17:01.096 "percent": 31 00:17:01.096 } 00:17:01.096 }, 00:17:01.096 "base_bdevs_list": [ 00:17:01.096 { 00:17:01.096 "name": "spare", 00:17:01.096 "uuid": 
"7764408e-0985-5d64-b144-08bb34b15ed6", 00:17:01.096 "is_configured": true, 00:17:01.096 "data_offset": 0, 00:17:01.096 "data_size": 65536 00:17:01.096 }, 00:17:01.096 { 00:17:01.096 "name": "BaseBdev2", 00:17:01.096 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:17:01.096 "is_configured": true, 00:17:01.096 "data_offset": 0, 00:17:01.096 "data_size": 65536 00:17:01.096 } 00:17:01.096 ] 00:17:01.096 }' 00:17:01.096 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=396 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.356 "name": "raid_bdev1", 00:17:01.356 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:17:01.356 "strip_size_kb": 0, 00:17:01.356 "state": "online", 00:17:01.356 "raid_level": "raid1", 00:17:01.356 "superblock": false, 00:17:01.356 "num_base_bdevs": 2, 00:17:01.356 "num_base_bdevs_discovered": 2, 00:17:01.356 "num_base_bdevs_operational": 2, 00:17:01.356 "process": { 00:17:01.356 "type": "rebuild", 00:17:01.356 "target": "spare", 00:17:01.356 "progress": { 00:17:01.356 "blocks": 22528, 00:17:01.356 "percent": 34 00:17:01.356 } 00:17:01.356 }, 00:17:01.356 "base_bdevs_list": [ 00:17:01.356 { 00:17:01.356 "name": "spare", 00:17:01.356 "uuid": "7764408e-0985-5d64-b144-08bb34b15ed6", 00:17:01.356 "is_configured": true, 00:17:01.356 "data_offset": 0, 00:17:01.356 "data_size": 65536 00:17:01.356 }, 00:17:01.356 { 00:17:01.356 "name": "BaseBdev2", 00:17:01.356 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:17:01.356 "is_configured": true, 00:17:01.356 "data_offset": 0, 00:17:01.356 "data_size": 65536 00:17:01.356 } 00:17:01.356 ] 00:17:01.356 }' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.356 10:45:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.380 10:45:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.639 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.639 "name": "raid_bdev1", 00:17:02.639 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:17:02.639 "strip_size_kb": 0, 00:17:02.639 "state": "online", 00:17:02.639 "raid_level": "raid1", 00:17:02.639 "superblock": false, 00:17:02.639 "num_base_bdevs": 2, 00:17:02.639 "num_base_bdevs_discovered": 2, 00:17:02.639 "num_base_bdevs_operational": 2, 00:17:02.639 "process": { 00:17:02.639 "type": "rebuild", 00:17:02.639 "target": "spare", 
00:17:02.639 "progress": { 00:17:02.639 "blocks": 47104, 00:17:02.639 "percent": 71 00:17:02.639 } 00:17:02.639 }, 00:17:02.639 "base_bdevs_list": [ 00:17:02.639 { 00:17:02.639 "name": "spare", 00:17:02.639 "uuid": "7764408e-0985-5d64-b144-08bb34b15ed6", 00:17:02.639 "is_configured": true, 00:17:02.639 "data_offset": 0, 00:17:02.639 "data_size": 65536 00:17:02.639 }, 00:17:02.639 { 00:17:02.639 "name": "BaseBdev2", 00:17:02.639 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:17:02.639 "is_configured": true, 00:17:02.639 "data_offset": 0, 00:17:02.639 "data_size": 65536 00:17:02.639 } 00:17:02.639 ] 00:17:02.639 }' 00:17:02.639 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.639 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.639 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.639 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.639 10:45:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.573 [2024-10-30 10:45:24.731364] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:03.573 [2024-10-30 10:45:24.731454] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:03.573 [2024-10-30 10:45:24.731520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.573 10:45:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.573 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.832 "name": "raid_bdev1", 00:17:03.832 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:17:03.832 "strip_size_kb": 0, 00:17:03.832 "state": "online", 00:17:03.832 "raid_level": "raid1", 00:17:03.832 "superblock": false, 00:17:03.832 "num_base_bdevs": 2, 00:17:03.832 "num_base_bdevs_discovered": 2, 00:17:03.832 "num_base_bdevs_operational": 2, 00:17:03.832 "base_bdevs_list": [ 00:17:03.832 { 00:17:03.832 "name": "spare", 00:17:03.832 "uuid": "7764408e-0985-5d64-b144-08bb34b15ed6", 00:17:03.832 "is_configured": true, 00:17:03.832 "data_offset": 0, 00:17:03.832 "data_size": 65536 00:17:03.832 }, 00:17:03.832 { 00:17:03.832 "name": "BaseBdev2", 00:17:03.832 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:17:03.832 "is_configured": true, 00:17:03.832 "data_offset": 0, 00:17:03.832 "data_size": 65536 00:17:03.832 } 00:17:03.832 ] 00:17:03.832 }' 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.832 "name": "raid_bdev1", 00:17:03.832 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:17:03.832 "strip_size_kb": 0, 00:17:03.832 "state": "online", 00:17:03.832 "raid_level": "raid1", 00:17:03.832 "superblock": false, 00:17:03.832 "num_base_bdevs": 2, 00:17:03.832 "num_base_bdevs_discovered": 2, 00:17:03.832 "num_base_bdevs_operational": 2, 00:17:03.832 "base_bdevs_list": [ 00:17:03.832 { 00:17:03.832 "name": "spare", 00:17:03.832 "uuid": "7764408e-0985-5d64-b144-08bb34b15ed6", 00:17:03.832 "is_configured": true, 00:17:03.832 "data_offset": 0, 00:17:03.832 "data_size": 65536 
00:17:03.832 }, 00:17:03.832 { 00:17:03.832 "name": "BaseBdev2", 00:17:03.832 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:17:03.832 "is_configured": true, 00:17:03.832 "data_offset": 0, 00:17:03.832 "data_size": 65536 00:17:03.832 } 00:17:03.832 ] 00:17:03.832 }' 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.832 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.091 "name": "raid_bdev1", 00:17:04.091 "uuid": "c808a342-4e93-42de-b3f4-86eb9871be2d", 00:17:04.091 "strip_size_kb": 0, 00:17:04.091 "state": "online", 00:17:04.091 "raid_level": "raid1", 00:17:04.091 "superblock": false, 00:17:04.091 "num_base_bdevs": 2, 00:17:04.091 "num_base_bdevs_discovered": 2, 00:17:04.091 "num_base_bdevs_operational": 2, 00:17:04.091 "base_bdevs_list": [ 00:17:04.091 { 00:17:04.091 "name": "spare", 00:17:04.091 "uuid": "7764408e-0985-5d64-b144-08bb34b15ed6", 00:17:04.091 "is_configured": true, 00:17:04.091 "data_offset": 0, 00:17:04.091 "data_size": 65536 00:17:04.091 }, 00:17:04.091 { 00:17:04.091 "name": "BaseBdev2", 00:17:04.091 "uuid": "8ac46ae1-405c-5677-af31-e389920812d1", 00:17:04.091 "is_configured": true, 00:17:04.091 "data_offset": 0, 00:17:04.091 "data_size": 65536 00:17:04.091 } 00:17:04.091 ] 00:17:04.091 }' 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.091 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.658 [2024-10-30 10:45:25.832516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.658 [2024-10-30 10:45:25.832723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:17:04.658 [2024-10-30 10:45:25.832862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.658 [2024-10-30 10:45:25.832970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.658 [2024-10-30 10:45:25.832988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:04.658 10:45:25 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.658 10:45:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:04.959 /dev/nbd0 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.959 1+0 records in 00:17:04.959 1+0 records out 00:17:04.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573831 s, 7.1 MB/s 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.959 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:05.218 /dev/nbd1 00:17:05.218 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:05.218 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:05.218 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:05.218 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:05.218 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.219 1+0 records in 00:17:05.219 1+0 records out 00:17:05.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426841 s, 9.6 MB/s 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:05.219 10:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:05.477 10:45:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:05.477 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.477 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:05.477 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.477 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:05.477 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.477 10:45:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.736 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75646 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@952 -- # '[' -z 75646 ']' 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75646 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75646 00:17:05.995 killing process with pid 75646 00:17:05.995 Received shutdown signal, test time was about 60.000000 seconds 00:17:05.995 00:17:05.995 Latency(us) 00:17:05.995 [2024-10-30T10:45:27.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.995 [2024-10-30T10:45:27.465Z] =================================================================================================================== 00:17:05.995 [2024-10-30T10:45:27.465Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75646' 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75646 00:17:05.995 [2024-10-30 10:45:27.376839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.995 10:45:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75646 00:17:06.253 [2024-10-30 10:45:27.644438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:07.633 00:17:07.633 real 0m18.585s 00:17:07.633 user 0m20.903s 00:17:07.633 sys 0m3.516s 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:17:07.633 ************************************ 00:17:07.633 END TEST raid_rebuild_test 00:17:07.633 ************************************ 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.633 10:45:28 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:17:07.633 10:45:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:07.633 10:45:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:07.633 10:45:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:07.633 ************************************ 00:17:07.633 START TEST raid_rebuild_test_sb 00:17:07.633 ************************************ 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.633 10:45:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76093 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76093 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 76093 ']' 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:07.633 10:45:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.633 [2024-10-30 10:45:28.877152] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:17:07.633 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:07.633 Zero copy mechanism will not be used. 00:17:07.633 [2024-10-30 10:45:28.878151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76093 ] 00:17:07.633 [2024-10-30 10:45:29.061600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.893 [2024-10-30 10:45:29.193913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.152 [2024-10-30 10:45:29.394834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.152 [2024-10-30 10:45:29.395034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:08.409 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:08.409 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:08.409 10:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:17:08.409 10:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:08.409 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.409 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.667 BaseBdev1_malloc 00:17:08.667 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.667 10:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:08.667 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.667 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.667 [2024-10-30 10:45:29.901978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:08.667 [2024-10-30 10:45:29.902089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.667 [2024-10-30 10:45:29.902123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:08.667 [2024-10-30 10:45:29.902150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.667 [2024-10-30 10:45:29.904941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.667 [2024-10-30 10:45:29.905020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:08.667 BaseBdev1 00:17:08.667 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.667 10:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:08.667 10:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.668 BaseBdev2_malloc 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.668 [2024-10-30 10:45:29.954352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:08.668 [2024-10-30 10:45:29.954570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.668 [2024-10-30 10:45:29.954729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:08.668 [2024-10-30 10:45:29.954859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.668 [2024-10-30 10:45:29.957687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.668 [2024-10-30 10:45:29.957737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:08.668 BaseBdev2 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.668 10:45:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.668 spare_malloc 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.668 10:45:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.668 spare_delay 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.668 [2024-10-30 10:45:30.024112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.668 [2024-10-30 10:45:30.024201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.668 [2024-10-30 10:45:30.024231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:08.668 [2024-10-30 10:45:30.024250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.668 [2024-10-30 10:45:30.027007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.668 [2024-10-30 10:45:30.027056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.668 spare 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:08.668 [2024-10-30 10:45:30.032189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.668 [2024-10-30 10:45:30.034734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.668 [2024-10-30 10:45:30.035117] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:08.668 [2024-10-30 10:45:30.035162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:08.668 [2024-10-30 10:45:30.035490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:08.668 [2024-10-30 10:45:30.035702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:08.668 [2024-10-30 10:45:30.035718] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:08.668 [2024-10-30 10:45:30.035926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.668 10:45:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.668 "name": "raid_bdev1", 00:17:08.668 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:08.668 "strip_size_kb": 0, 00:17:08.668 "state": "online", 00:17:08.668 "raid_level": "raid1", 00:17:08.668 "superblock": true, 00:17:08.668 "num_base_bdevs": 2, 00:17:08.668 "num_base_bdevs_discovered": 2, 00:17:08.668 "num_base_bdevs_operational": 2, 00:17:08.668 "base_bdevs_list": [ 00:17:08.668 { 00:17:08.668 "name": "BaseBdev1", 00:17:08.668 "uuid": "7bb35c6c-277b-5e6a-bf48-47681ab9c771", 00:17:08.668 "is_configured": true, 00:17:08.668 "data_offset": 2048, 00:17:08.668 "data_size": 63488 00:17:08.668 }, 00:17:08.668 { 00:17:08.668 "name": "BaseBdev2", 00:17:08.668 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:08.668 "is_configured": true, 00:17:08.668 "data_offset": 2048, 00:17:08.668 "data_size": 63488 00:17:08.668 } 00:17:08.668 ] 00:17:08.668 }' 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.668 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:09.236 [2024-10-30 10:45:30.568704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.236 
10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:09.236 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:09.495 [2024-10-30 10:45:30.944521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:09.495 /dev/nbd0 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:09.755 10:45:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:09.755 1+0 records in 00:17:09.755 1+0 records out 00:17:09.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337727 s, 12.1 MB/s 00:17:09.755 10:45:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:09.755 10:45:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:16.351 63488+0 records in 00:17:16.351 63488+0 records out 00:17:16.351 32505856 bytes (33 MB, 31 MiB) copied, 6.21714 s, 5.2 MB/s 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:16.351 [2024-10-30 10:45:37.479541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.351 [2024-10-30 10:45:37.511641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.351 10:45:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.351 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.351 "name": "raid_bdev1", 00:17:16.351 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:16.351 "strip_size_kb": 0, 00:17:16.351 "state": "online", 00:17:16.351 "raid_level": "raid1", 00:17:16.351 "superblock": true, 00:17:16.351 "num_base_bdevs": 2, 
00:17:16.351 "num_base_bdevs_discovered": 1, 00:17:16.351 "num_base_bdevs_operational": 1, 00:17:16.351 "base_bdevs_list": [ 00:17:16.351 { 00:17:16.351 "name": null, 00:17:16.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.351 "is_configured": false, 00:17:16.351 "data_offset": 0, 00:17:16.351 "data_size": 63488 00:17:16.351 }, 00:17:16.352 { 00:17:16.352 "name": "BaseBdev2", 00:17:16.352 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:16.352 "is_configured": true, 00:17:16.352 "data_offset": 2048, 00:17:16.352 "data_size": 63488 00:17:16.352 } 00:17:16.352 ] 00:17:16.352 }' 00:17:16.352 10:45:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.352 10:45:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 10:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:16.610 10:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.610 10:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.610 [2024-10-30 10:45:38.039916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.610 [2024-10-30 10:45:38.056926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:17:16.610 10:45:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.610 10:45:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:16.610 [2024-10-30 10:45:38.059457] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.995 10:45:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.995 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.995 "name": "raid_bdev1", 00:17:17.995 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:17.995 "strip_size_kb": 0, 00:17:17.995 "state": "online", 00:17:17.995 "raid_level": "raid1", 00:17:17.995 "superblock": true, 00:17:17.995 "num_base_bdevs": 2, 00:17:17.995 "num_base_bdevs_discovered": 2, 00:17:17.995 "num_base_bdevs_operational": 2, 00:17:17.995 "process": { 00:17:17.996 "type": "rebuild", 00:17:17.996 "target": "spare", 00:17:17.996 "progress": { 00:17:17.996 "blocks": 20480, 00:17:17.996 "percent": 32 00:17:17.996 } 00:17:17.996 }, 00:17:17.996 "base_bdevs_list": [ 00:17:17.996 { 00:17:17.996 "name": "spare", 00:17:17.996 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:17.996 "is_configured": true, 00:17:17.996 "data_offset": 2048, 00:17:17.996 "data_size": 63488 00:17:17.996 }, 00:17:17.996 { 00:17:17.996 "name": "BaseBdev2", 00:17:17.996 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:17.996 "is_configured": true, 00:17:17.996 "data_offset": 2048, 00:17:17.996 "data_size": 63488 00:17:17.996 } 
00:17:17.996 ] 00:17:17.996 }' 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.996 [2024-10-30 10:45:39.216927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.996 [2024-10-30 10:45:39.268134] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.996 [2024-10-30 10:45:39.268355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.996 [2024-10-30 10:45:39.268384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.996 [2024-10-30 10:45:39.268405] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.996 "name": "raid_bdev1", 00:17:17.996 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:17.996 "strip_size_kb": 0, 00:17:17.996 "state": "online", 00:17:17.996 "raid_level": "raid1", 00:17:17.996 "superblock": true, 00:17:17.996 "num_base_bdevs": 2, 00:17:17.996 "num_base_bdevs_discovered": 1, 00:17:17.996 "num_base_bdevs_operational": 1, 00:17:17.996 "base_bdevs_list": [ 00:17:17.996 { 00:17:17.996 "name": null, 00:17:17.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.996 "is_configured": false, 00:17:17.996 "data_offset": 0, 00:17:17.996 "data_size": 63488 00:17:17.996 }, 00:17:17.996 { 00:17:17.996 "name": "BaseBdev2", 00:17:17.996 "uuid": 
"730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:17.996 "is_configured": true, 00:17:17.996 "data_offset": 2048, 00:17:17.996 "data_size": 63488 00:17:17.996 } 00:17:17.996 ] 00:17:17.996 }' 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.996 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.563 "name": "raid_bdev1", 00:17:18.563 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:18.563 "strip_size_kb": 0, 00:17:18.563 "state": "online", 00:17:18.563 "raid_level": "raid1", 00:17:18.563 "superblock": true, 00:17:18.563 "num_base_bdevs": 2, 00:17:18.563 "num_base_bdevs_discovered": 1, 00:17:18.563 "num_base_bdevs_operational": 1, 00:17:18.563 "base_bdevs_list": [ 00:17:18.563 { 
00:17:18.563 "name": null, 00:17:18.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.563 "is_configured": false, 00:17:18.563 "data_offset": 0, 00:17:18.563 "data_size": 63488 00:17:18.563 }, 00:17:18.563 { 00:17:18.563 "name": "BaseBdev2", 00:17:18.563 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:18.563 "is_configured": true, 00:17:18.563 "data_offset": 2048, 00:17:18.563 "data_size": 63488 00:17:18.563 } 00:17:18.563 ] 00:17:18.563 }' 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.563 [2024-10-30 10:45:39.960695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.563 [2024-10-30 10:45:39.977483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.563 10:45:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:18.563 [2024-10-30 10:45:39.980099] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.939 10:45:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.939 10:45:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.939 "name": "raid_bdev1", 00:17:19.939 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:19.939 "strip_size_kb": 0, 00:17:19.939 "state": "online", 00:17:19.939 "raid_level": "raid1", 00:17:19.939 "superblock": true, 00:17:19.939 "num_base_bdevs": 2, 00:17:19.939 "num_base_bdevs_discovered": 2, 00:17:19.939 "num_base_bdevs_operational": 2, 00:17:19.939 "process": { 00:17:19.939 "type": "rebuild", 00:17:19.939 "target": "spare", 00:17:19.939 "progress": { 00:17:19.939 "blocks": 20480, 00:17:19.939 "percent": 32 00:17:19.939 } 00:17:19.939 }, 00:17:19.939 "base_bdevs_list": [ 00:17:19.939 { 00:17:19.939 "name": "spare", 00:17:19.939 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:19.939 "is_configured": true, 00:17:19.939 "data_offset": 2048, 00:17:19.939 "data_size": 63488 00:17:19.939 }, 00:17:19.939 { 00:17:19.939 "name": "BaseBdev2", 00:17:19.939 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:19.939 
"is_configured": true, 00:17:19.939 "data_offset": 2048, 00:17:19.939 "data_size": 63488 00:17:19.939 } 00:17:19.939 ] 00:17:19.939 }' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:19.939 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.939 "name": "raid_bdev1", 00:17:19.939 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:19.939 "strip_size_kb": 0, 00:17:19.939 "state": "online", 00:17:19.939 "raid_level": "raid1", 00:17:19.939 "superblock": true, 00:17:19.939 "num_base_bdevs": 2, 00:17:19.939 "num_base_bdevs_discovered": 2, 00:17:19.939 "num_base_bdevs_operational": 2, 00:17:19.939 "process": { 00:17:19.939 "type": "rebuild", 00:17:19.939 "target": "spare", 00:17:19.939 "progress": { 00:17:19.939 "blocks": 22528, 00:17:19.939 "percent": 35 00:17:19.939 } 00:17:19.939 }, 00:17:19.939 "base_bdevs_list": [ 00:17:19.939 { 00:17:19.939 "name": "spare", 00:17:19.939 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:19.939 "is_configured": true, 00:17:19.939 "data_offset": 2048, 00:17:19.939 "data_size": 63488 00:17:19.939 }, 00:17:19.939 { 00:17:19.939 "name": "BaseBdev2", 00:17:19.939 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:19.939 "is_configured": true, 00:17:19.939 "data_offset": 2048, 00:17:19.939 "data_size": 63488 00:17:19.939 } 00:17:19.939 ] 00:17:19.939 }' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.939 10:45:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.939 10:45:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.875 10:45:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.134 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.134 "name": "raid_bdev1", 00:17:21.134 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:21.134 "strip_size_kb": 0, 00:17:21.134 "state": "online", 00:17:21.134 "raid_level": "raid1", 00:17:21.134 "superblock": true, 00:17:21.134 "num_base_bdevs": 2, 00:17:21.134 "num_base_bdevs_discovered": 2, 00:17:21.134 "num_base_bdevs_operational": 2, 00:17:21.134 "process": { 
00:17:21.134 "type": "rebuild", 00:17:21.134 "target": "spare", 00:17:21.134 "progress": { 00:17:21.134 "blocks": 47104, 00:17:21.134 "percent": 74 00:17:21.134 } 00:17:21.134 }, 00:17:21.134 "base_bdevs_list": [ 00:17:21.134 { 00:17:21.134 "name": "spare", 00:17:21.134 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:21.134 "is_configured": true, 00:17:21.134 "data_offset": 2048, 00:17:21.134 "data_size": 63488 00:17:21.134 }, 00:17:21.134 { 00:17:21.134 "name": "BaseBdev2", 00:17:21.134 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:21.134 "is_configured": true, 00:17:21.134 "data_offset": 2048, 00:17:21.134 "data_size": 63488 00:17:21.134 } 00:17:21.134 ] 00:17:21.134 }' 00:17:21.134 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.134 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.134 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.134 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.134 10:45:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.701 [2024-10-30 10:45:43.101705] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:21.701 [2024-10-30 10:45:43.101785] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:21.701 [2024-10-30 10:45:43.101927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.270 
10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.270 "name": "raid_bdev1", 00:17:22.270 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:22.270 "strip_size_kb": 0, 00:17:22.270 "state": "online", 00:17:22.270 "raid_level": "raid1", 00:17:22.270 "superblock": true, 00:17:22.270 "num_base_bdevs": 2, 00:17:22.270 "num_base_bdevs_discovered": 2, 00:17:22.270 "num_base_bdevs_operational": 2, 00:17:22.270 "base_bdevs_list": [ 00:17:22.270 { 00:17:22.270 "name": "spare", 00:17:22.270 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:22.270 "is_configured": true, 00:17:22.270 "data_offset": 2048, 00:17:22.270 "data_size": 63488 00:17:22.270 }, 00:17:22.270 { 00:17:22.270 "name": "BaseBdev2", 00:17:22.270 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:22.270 "is_configured": true, 00:17:22.270 "data_offset": 2048, 00:17:22.270 "data_size": 63488 00:17:22.270 } 00:17:22.270 ] 00:17:22.270 }' 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.270 "name": "raid_bdev1", 00:17:22.270 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:22.270 "strip_size_kb": 0, 00:17:22.270 "state": "online", 00:17:22.270 "raid_level": "raid1", 00:17:22.270 "superblock": true, 00:17:22.270 "num_base_bdevs": 2, 00:17:22.270 "num_base_bdevs_discovered": 2, 00:17:22.270 "num_base_bdevs_operational": 2, 00:17:22.270 "base_bdevs_list": [ 00:17:22.270 { 00:17:22.270 
"name": "spare", 00:17:22.270 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:22.270 "is_configured": true, 00:17:22.270 "data_offset": 2048, 00:17:22.270 "data_size": 63488 00:17:22.270 }, 00:17:22.270 { 00:17:22.270 "name": "BaseBdev2", 00:17:22.270 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:22.270 "is_configured": true, 00:17:22.270 "data_offset": 2048, 00:17:22.270 "data_size": 63488 00:17:22.270 } 00:17:22.270 ] 00:17:22.270 }' 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.270 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.531 "name": "raid_bdev1", 00:17:22.531 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:22.531 "strip_size_kb": 0, 00:17:22.531 "state": "online", 00:17:22.531 "raid_level": "raid1", 00:17:22.531 "superblock": true, 00:17:22.531 "num_base_bdevs": 2, 00:17:22.531 "num_base_bdevs_discovered": 2, 00:17:22.531 "num_base_bdevs_operational": 2, 00:17:22.531 "base_bdevs_list": [ 00:17:22.531 { 00:17:22.531 "name": "spare", 00:17:22.531 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:22.531 "is_configured": true, 00:17:22.531 "data_offset": 2048, 00:17:22.531 "data_size": 63488 00:17:22.531 }, 00:17:22.531 { 00:17:22.531 "name": "BaseBdev2", 00:17:22.531 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:22.531 "is_configured": true, 00:17:22.531 "data_offset": 2048, 00:17:22.531 "data_size": 63488 00:17:22.531 } 00:17:22.531 ] 00:17:22.531 }' 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.531 10:45:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.099 [2024-10-30 10:45:44.291929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.099 [2024-10-30 10:45:44.292112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.099 [2024-10-30 10:45:44.292237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.099 [2024-10-30 10:45:44.292329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.099 [2024-10-30 10:45:44.292347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.099 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:23.357 /dev/nbd0 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.357 1+0 records in 00:17:23.357 1+0 records out 00:17:23.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360814 s, 11.4 MB/s 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.357 10:45:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:23.616 /dev/nbd1 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:23.616 10:45:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.616 1+0 records in 00:17:23.616 1+0 records out 00:17:23.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396577 s, 10.3 MB/s 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.616 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:23.875 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:23.875 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.875 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.875 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.875 
10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:23.875 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.875 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.134 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:24.393 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:24.393 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:24.393 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:24.393 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.394 [2024-10-30 10:45:45.812056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:24.394 [2024-10-30 10:45:45.812307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.394 [2024-10-30 10:45:45.812354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:24.394 [2024-10-30 10:45:45.812371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.394 [2024-10-30 10:45:45.815346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.394 spare 00:17:24.394 [2024-10-30 10:45:45.815598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:24.394 [2024-10-30 10:45:45.815743] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:24.394 
[2024-10-30 10:45:45.815810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.394 [2024-10-30 10:45:45.816051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.394 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.654 [2024-10-30 10:45:45.916244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:24.654 [2024-10-30 10:45:45.916274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:24.654 [2024-10-30 10:45:45.916564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:17:24.654 [2024-10-30 10:45:45.916767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:24.654 [2024-10-30 10:45:45.916783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:24.654 [2024-10-30 10:45:45.916948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.654 "name": "raid_bdev1", 00:17:24.654 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:24.654 "strip_size_kb": 0, 00:17:24.654 "state": "online", 00:17:24.654 "raid_level": "raid1", 00:17:24.654 "superblock": true, 00:17:24.654 "num_base_bdevs": 2, 00:17:24.654 "num_base_bdevs_discovered": 2, 00:17:24.654 "num_base_bdevs_operational": 2, 00:17:24.654 "base_bdevs_list": [ 00:17:24.654 { 00:17:24.654 "name": "spare", 00:17:24.654 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:24.654 "is_configured": true, 00:17:24.654 "data_offset": 2048, 00:17:24.654 "data_size": 63488 00:17:24.654 }, 00:17:24.654 { 00:17:24.654 "name": "BaseBdev2", 00:17:24.654 "uuid": 
"730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:24.654 "is_configured": true, 00:17:24.654 "data_offset": 2048, 00:17:24.654 "data_size": 63488 00:17:24.654 } 00:17:24.654 ] 00:17:24.654 }' 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.654 10:45:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.223 "name": "raid_bdev1", 00:17:25.223 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:25.223 "strip_size_kb": 0, 00:17:25.223 "state": "online", 00:17:25.223 "raid_level": "raid1", 00:17:25.223 "superblock": true, 00:17:25.223 "num_base_bdevs": 2, 00:17:25.223 "num_base_bdevs_discovered": 2, 00:17:25.223 "num_base_bdevs_operational": 2, 00:17:25.223 "base_bdevs_list": [ 00:17:25.223 { 
00:17:25.223 "name": "spare", 00:17:25.223 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:25.223 "is_configured": true, 00:17:25.223 "data_offset": 2048, 00:17:25.223 "data_size": 63488 00:17:25.223 }, 00:17:25.223 { 00:17:25.223 "name": "BaseBdev2", 00:17:25.223 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:25.223 "is_configured": true, 00:17:25.223 "data_offset": 2048, 00:17:25.223 "data_size": 63488 00:17:25.223 } 00:17:25.223 ] 00:17:25.223 }' 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.223 [2024-10-30 10:45:46.640457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.223 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.482 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.482 "name": "raid_bdev1", 00:17:25.482 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:25.482 "strip_size_kb": 0, 00:17:25.482 
"state": "online", 00:17:25.482 "raid_level": "raid1", 00:17:25.482 "superblock": true, 00:17:25.482 "num_base_bdevs": 2, 00:17:25.482 "num_base_bdevs_discovered": 1, 00:17:25.482 "num_base_bdevs_operational": 1, 00:17:25.482 "base_bdevs_list": [ 00:17:25.482 { 00:17:25.482 "name": null, 00:17:25.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.482 "is_configured": false, 00:17:25.482 "data_offset": 0, 00:17:25.482 "data_size": 63488 00:17:25.482 }, 00:17:25.482 { 00:17:25.482 "name": "BaseBdev2", 00:17:25.482 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:25.482 "is_configured": true, 00:17:25.482 "data_offset": 2048, 00:17:25.482 "data_size": 63488 00:17:25.482 } 00:17:25.482 ] 00:17:25.482 }' 00:17:25.482 10:45:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.482 10:45:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.742 10:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.742 10:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.742 10:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.742 [2024-10-30 10:45:47.180697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.742 [2024-10-30 10:45:47.181150] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.742 [2024-10-30 10:45:47.181184] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:25.742 [2024-10-30 10:45:47.181239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.742 [2024-10-30 10:45:47.197113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:17:25.742 10:45:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.742 10:45:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:25.742 [2024-10-30 10:45:47.199665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.120 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.120 "name": "raid_bdev1", 00:17:27.120 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:27.120 "strip_size_kb": 0, 00:17:27.120 "state": "online", 00:17:27.120 "raid_level": "raid1", 
00:17:27.120 "superblock": true, 00:17:27.120 "num_base_bdevs": 2, 00:17:27.120 "num_base_bdevs_discovered": 2, 00:17:27.120 "num_base_bdevs_operational": 2, 00:17:27.120 "process": { 00:17:27.120 "type": "rebuild", 00:17:27.120 "target": "spare", 00:17:27.120 "progress": { 00:17:27.120 "blocks": 20480, 00:17:27.120 "percent": 32 00:17:27.120 } 00:17:27.120 }, 00:17:27.121 "base_bdevs_list": [ 00:17:27.121 { 00:17:27.121 "name": "spare", 00:17:27.121 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:27.121 "is_configured": true, 00:17:27.121 "data_offset": 2048, 00:17:27.121 "data_size": 63488 00:17:27.121 }, 00:17:27.121 { 00:17:27.121 "name": "BaseBdev2", 00:17:27.121 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:27.121 "is_configured": true, 00:17:27.121 "data_offset": 2048, 00:17:27.121 "data_size": 63488 00:17:27.121 } 00:17:27.121 ] 00:17:27.121 }' 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.121 [2024-10-30 10:45:48.377436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.121 [2024-10-30 10:45:48.408425] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.121 [2024-10-30 10:45:48.408520] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:27.121 [2024-10-30 10:45:48.408546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.121 [2024-10-30 10:45:48.408561] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.121 "name": "raid_bdev1", 00:17:27.121 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:27.121 "strip_size_kb": 0, 00:17:27.121 "state": "online", 00:17:27.121 "raid_level": "raid1", 00:17:27.121 "superblock": true, 00:17:27.121 "num_base_bdevs": 2, 00:17:27.121 "num_base_bdevs_discovered": 1, 00:17:27.121 "num_base_bdevs_operational": 1, 00:17:27.121 "base_bdevs_list": [ 00:17:27.121 { 00:17:27.121 "name": null, 00:17:27.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.121 "is_configured": false, 00:17:27.121 "data_offset": 0, 00:17:27.121 "data_size": 63488 00:17:27.121 }, 00:17:27.121 { 00:17:27.121 "name": "BaseBdev2", 00:17:27.121 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:27.121 "is_configured": true, 00:17:27.121 "data_offset": 2048, 00:17:27.121 "data_size": 63488 00:17:27.121 } 00:17:27.121 ] 00:17:27.121 }' 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.121 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.724 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:27.724 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.724 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.724 [2024-10-30 10:45:48.976467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:27.724 [2024-10-30 10:45:48.976559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.724 [2024-10-30 10:45:48.976590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:27.724 [2024-10-30 10:45:48.976608] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.724 [2024-10-30 10:45:48.977270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.724 [2024-10-30 10:45:48.977302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:27.724 [2024-10-30 10:45:48.977496] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:27.724 [2024-10-30 10:45:48.977529] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:27.724 [2024-10-30 10:45:48.977543] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:27.724 [2024-10-30 10:45:48.977588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.724 spare 00:17:27.724 [2024-10-30 10:45:48.993544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:27.724 10:45:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.724 10:45:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:27.724 [2024-10-30 10:45:48.996099] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.660 10:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.660 10:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.660 10:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.660 10:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.660 10:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.660 10:45:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:28.660 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.660 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.660 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.660 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.660 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.660 "name": "raid_bdev1", 00:17:28.660 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:28.660 "strip_size_kb": 0, 00:17:28.660 "state": "online", 00:17:28.660 "raid_level": "raid1", 00:17:28.660 "superblock": true, 00:17:28.660 "num_base_bdevs": 2, 00:17:28.660 "num_base_bdevs_discovered": 2, 00:17:28.660 "num_base_bdevs_operational": 2, 00:17:28.660 "process": { 00:17:28.660 "type": "rebuild", 00:17:28.660 "target": "spare", 00:17:28.660 "progress": { 00:17:28.660 "blocks": 20480, 00:17:28.660 "percent": 32 00:17:28.660 } 00:17:28.660 }, 00:17:28.660 "base_bdevs_list": [ 00:17:28.660 { 00:17:28.660 "name": "spare", 00:17:28.660 "uuid": "06f4e4c1-5e11-5e02-a01e-faf81ff3ed90", 00:17:28.660 "is_configured": true, 00:17:28.660 "data_offset": 2048, 00:17:28.660 "data_size": 63488 00:17:28.660 }, 00:17:28.660 { 00:17:28.660 "name": "BaseBdev2", 00:17:28.660 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:28.660 "is_configured": true, 00:17:28.660 "data_offset": 2048, 00:17:28.660 "data_size": 63488 00:17:28.660 } 00:17:28.660 ] 00:17:28.660 }' 00:17:28.660 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.660 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.660 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.918 
10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.918 [2024-10-30 10:45:50.161691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.918 [2024-10-30 10:45:50.204901] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.918 [2024-10-30 10:45:50.205186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.918 [2024-10-30 10:45:50.205221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.918 [2024-10-30 10:45:50.205235] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.918 "name": "raid_bdev1", 00:17:28.918 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:28.918 "strip_size_kb": 0, 00:17:28.918 "state": "online", 00:17:28.918 "raid_level": "raid1", 00:17:28.918 "superblock": true, 00:17:28.918 "num_base_bdevs": 2, 00:17:28.918 "num_base_bdevs_discovered": 1, 00:17:28.918 "num_base_bdevs_operational": 1, 00:17:28.918 "base_bdevs_list": [ 00:17:28.918 { 00:17:28.918 "name": null, 00:17:28.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.918 "is_configured": false, 00:17:28.918 "data_offset": 0, 00:17:28.918 "data_size": 63488 00:17:28.918 }, 00:17:28.918 { 00:17:28.918 "name": "BaseBdev2", 00:17:28.918 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:28.918 "is_configured": true, 00:17:28.918 "data_offset": 2048, 00:17:28.918 "data_size": 63488 00:17:28.918 } 00:17:28.918 ] 00:17:28.918 }' 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.918 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.485 10:45:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.485 "name": "raid_bdev1", 00:17:29.485 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:29.485 "strip_size_kb": 0, 00:17:29.485 "state": "online", 00:17:29.485 "raid_level": "raid1", 00:17:29.485 "superblock": true, 00:17:29.485 "num_base_bdevs": 2, 00:17:29.485 "num_base_bdevs_discovered": 1, 00:17:29.485 "num_base_bdevs_operational": 1, 00:17:29.485 "base_bdevs_list": [ 00:17:29.485 { 00:17:29.485 "name": null, 00:17:29.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.485 "is_configured": false, 00:17:29.485 "data_offset": 0, 00:17:29.485 "data_size": 63488 00:17:29.485 }, 00:17:29.485 { 00:17:29.485 "name": "BaseBdev2", 00:17:29.485 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:29.485 "is_configured": true, 00:17:29.485 "data_offset": 2048, 00:17:29.485 "data_size": 
63488 00:17:29.485 } 00:17:29.485 ] 00:17:29.485 }' 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.485 [2024-10-30 10:45:50.916119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:29.485 [2024-10-30 10:45:50.916182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.485 [2024-10-30 10:45:50.916215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:29.485 [2024-10-30 10:45:50.916242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.485 [2024-10-30 10:45:50.916785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.485 [2024-10-30 10:45:50.916818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:17:29.485 [2024-10-30 10:45:50.916922] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:29.485 [2024-10-30 10:45:50.916943] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:29.485 [2024-10-30 10:45:50.916957] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:29.485 [2024-10-30 10:45:50.916985] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:29.485 BaseBdev1 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.485 10:45:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.859 "name": "raid_bdev1", 00:17:30.859 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:30.859 "strip_size_kb": 0, 00:17:30.859 "state": "online", 00:17:30.859 "raid_level": "raid1", 00:17:30.859 "superblock": true, 00:17:30.859 "num_base_bdevs": 2, 00:17:30.859 "num_base_bdevs_discovered": 1, 00:17:30.859 "num_base_bdevs_operational": 1, 00:17:30.859 "base_bdevs_list": [ 00:17:30.859 { 00:17:30.859 "name": null, 00:17:30.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.859 "is_configured": false, 00:17:30.859 "data_offset": 0, 00:17:30.859 "data_size": 63488 00:17:30.859 }, 00:17:30.859 { 00:17:30.859 "name": "BaseBdev2", 00:17:30.859 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:30.859 "is_configured": true, 00:17:30.859 "data_offset": 2048, 00:17:30.859 "data_size": 63488 00:17:30.859 } 00:17:30.859 ] 00:17:30.859 }' 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.859 10:45:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.118 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.118 "name": "raid_bdev1", 00:17:31.118 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:31.118 "strip_size_kb": 0, 00:17:31.118 "state": "online", 00:17:31.118 "raid_level": "raid1", 00:17:31.118 "superblock": true, 00:17:31.118 "num_base_bdevs": 2, 00:17:31.118 "num_base_bdevs_discovered": 1, 00:17:31.118 "num_base_bdevs_operational": 1, 00:17:31.118 "base_bdevs_list": [ 00:17:31.118 { 00:17:31.118 "name": null, 00:17:31.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.119 "is_configured": false, 00:17:31.119 "data_offset": 0, 00:17:31.119 "data_size": 63488 00:17:31.119 }, 00:17:31.119 { 00:17:31.119 "name": "BaseBdev2", 00:17:31.119 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:31.119 "is_configured": true, 00:17:31.119 "data_offset": 2048, 00:17:31.119 "data_size": 63488 00:17:31.119 } 00:17:31.119 ] 00:17:31.119 }' 00:17:31.119 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.119 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.119 10:45:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.376 [2024-10-30 10:45:52.608738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.376 [2024-10-30 10:45:52.609154] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:31.376 [2024-10-30 10:45:52.609190] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:31.376 request: 00:17:31.376 { 00:17:31.376 "base_bdev": "BaseBdev1", 00:17:31.376 "raid_bdev": "raid_bdev1", 00:17:31.376 "method": 
"bdev_raid_add_base_bdev", 00:17:31.376 "req_id": 1 00:17:31.376 } 00:17:31.376 Got JSON-RPC error response 00:17:31.376 response: 00:17:31.376 { 00:17:31.376 "code": -22, 00:17:31.376 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:31.376 } 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:31.376 10:45:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.309 10:45:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.309 "name": "raid_bdev1", 00:17:32.309 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:32.309 "strip_size_kb": 0, 00:17:32.309 "state": "online", 00:17:32.309 "raid_level": "raid1", 00:17:32.309 "superblock": true, 00:17:32.309 "num_base_bdevs": 2, 00:17:32.309 "num_base_bdevs_discovered": 1, 00:17:32.309 "num_base_bdevs_operational": 1, 00:17:32.309 "base_bdevs_list": [ 00:17:32.309 { 00:17:32.309 "name": null, 00:17:32.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.309 "is_configured": false, 00:17:32.309 "data_offset": 0, 00:17:32.309 "data_size": 63488 00:17:32.309 }, 00:17:32.309 { 00:17:32.309 "name": "BaseBdev2", 00:17:32.309 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:32.309 "is_configured": true, 00:17:32.309 "data_offset": 2048, 00:17:32.309 "data_size": 63488 00:17:32.309 } 00:17:32.309 ] 00:17:32.309 }' 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.309 10:45:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.875 "name": "raid_bdev1", 00:17:32.875 "uuid": "25e29ccd-d7eb-4339-8d82-07756ee97994", 00:17:32.875 "strip_size_kb": 0, 00:17:32.875 "state": "online", 00:17:32.875 "raid_level": "raid1", 00:17:32.875 "superblock": true, 00:17:32.875 "num_base_bdevs": 2, 00:17:32.875 "num_base_bdevs_discovered": 1, 00:17:32.875 "num_base_bdevs_operational": 1, 00:17:32.875 "base_bdevs_list": [ 00:17:32.875 { 00:17:32.875 "name": null, 00:17:32.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.875 "is_configured": false, 00:17:32.875 "data_offset": 0, 00:17:32.875 "data_size": 63488 00:17:32.875 }, 00:17:32.875 { 00:17:32.875 "name": "BaseBdev2", 00:17:32.875 "uuid": "730e9afd-e2e7-5814-97bd-b554d1f38199", 00:17:32.875 "is_configured": true, 00:17:32.875 "data_offset": 2048, 00:17:32.875 "data_size": 63488 00:17:32.875 } 00:17:32.875 ] 00:17:32.875 }' 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76093 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 76093 ']' 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 76093 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76093 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:32.875 killing process with pid 76093 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76093' 00:17:32.875 10:45:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 76093 00:17:32.875 Received shutdown signal, test time was about 60.000000 seconds 00:17:32.875 00:17:32.875 Latency(us) 00:17:32.875 [2024-10-30T10:45:54.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.875 [2024-10-30T10:45:54.345Z] =================================================================================================================== 00:17:32.875 [2024-10-30T10:45:54.345Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:32.875 [2024-10-30 10:45:54.344477] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.875 10:45:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 76093 00:17:33.134 [2024-10-30 10:45:54.344632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.134 [2024-10-30 10:45:54.344709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.134 [2024-10-30 10:45:54.344730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:33.393 [2024-10-30 10:45:54.604722] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.329 ************************************ 00:17:34.329 END TEST raid_rebuild_test_sb 00:17:34.329 ************************************ 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:34.329 00:17:34.329 real 0m26.835s 00:17:34.329 user 0m33.034s 00:17:34.329 sys 0m3.898s 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.329 10:45:55 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:17:34.329 10:45:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:34.329 10:45:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:34.329 10:45:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.329 ************************************ 00:17:34.329 START TEST raid_rebuild_test_io 00:17:34.329 ************************************ 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.329 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:34.330 
10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76864 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76864 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76864 ']' 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:34.330 10:45:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:34.330 [2024-10-30 10:45:55.765252] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:17:34.330 [2024-10-30 10:45:55.765758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:34.330 Zero copy mechanism will not be used. 
00:17:34.330 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76864 ] 00:17:34.588 [2024-10-30 10:45:55.953053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.847 [2024-10-30 10:45:56.079190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.847 [2024-10-30 10:45:56.277412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.847 [2024-10-30 10:45:56.277485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.414 BaseBdev1_malloc 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.414 [2024-10-30 10:45:56.778632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:35.414 [2024-10-30 10:45:56.778741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:35.414 [2024-10-30 10:45:56.778770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:35.414 [2024-10-30 10:45:56.778793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.414 [2024-10-30 10:45:56.781673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.414 [2024-10-30 10:45:56.781907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:35.414 BaseBdev1 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.414 BaseBdev2_malloc 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.414 [2024-10-30 10:45:56.831172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:35.414 [2024-10-30 10:45:56.831245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.414 [2024-10-30 10:45:56.831272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:35.414 [2024-10-30 10:45:56.831292] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.414 [2024-10-30 10:45:56.834155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.414 [2024-10-30 10:45:56.834202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:35.414 BaseBdev2 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.414 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.673 spare_malloc 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.673 spare_delay 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.673 [2024-10-30 10:45:56.906437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:35.673 [2024-10-30 10:45:56.906689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:17:35.673 [2024-10-30 10:45:56.906728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:35.673 [2024-10-30 10:45:56.906746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.673 [2024-10-30 10:45:56.909655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.673 [2024-10-30 10:45:56.909701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:35.673 spare 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.673 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.674 [2024-10-30 10:45:56.914590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.674 [2024-10-30 10:45:56.917121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.674 [2024-10-30 10:45:56.917242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:35.674 [2024-10-30 10:45:56.917269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:35.674 [2024-10-30 10:45:56.917569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:35.674 [2024-10-30 10:45:56.917749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:35.674 [2024-10-30 10:45:56.917766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:35.674 [2024-10-30 10:45:56.917936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.674 "name": "raid_bdev1", 00:17:35.674 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:35.674 
"strip_size_kb": 0, 00:17:35.674 "state": "online", 00:17:35.674 "raid_level": "raid1", 00:17:35.674 "superblock": false, 00:17:35.674 "num_base_bdevs": 2, 00:17:35.674 "num_base_bdevs_discovered": 2, 00:17:35.674 "num_base_bdevs_operational": 2, 00:17:35.674 "base_bdevs_list": [ 00:17:35.674 { 00:17:35.674 "name": "BaseBdev1", 00:17:35.674 "uuid": "34db74bf-4e4c-59b6-9b90-38b873395231", 00:17:35.674 "is_configured": true, 00:17:35.674 "data_offset": 0, 00:17:35.674 "data_size": 65536 00:17:35.674 }, 00:17:35.674 { 00:17:35.674 "name": "BaseBdev2", 00:17:35.674 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:35.674 "is_configured": true, 00:17:35.674 "data_offset": 0, 00:17:35.674 "data_size": 65536 00:17:35.674 } 00:17:35.674 ] 00:17:35.674 }' 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.674 10:45:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.242 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.242 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:36.242 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.242 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.242 [2024-10-30 10:45:57.463182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.242 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.242 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:36.242 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.242 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:36.243 10:45:57 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.243 [2024-10-30 10:45:57.570763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.243 10:45:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.243 "name": "raid_bdev1", 00:17:36.243 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:36.243 "strip_size_kb": 0, 00:17:36.243 "state": "online", 00:17:36.243 "raid_level": "raid1", 00:17:36.243 "superblock": false, 00:17:36.243 "num_base_bdevs": 2, 00:17:36.243 "num_base_bdevs_discovered": 1, 00:17:36.243 "num_base_bdevs_operational": 1, 00:17:36.243 "base_bdevs_list": [ 00:17:36.243 { 00:17:36.243 "name": null, 00:17:36.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.243 "is_configured": false, 00:17:36.243 "data_offset": 0, 00:17:36.243 "data_size": 65536 00:17:36.243 }, 00:17:36.243 { 00:17:36.243 "name": "BaseBdev2", 00:17:36.243 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:36.243 "is_configured": true, 00:17:36.243 "data_offset": 0, 00:17:36.243 "data_size": 65536 00:17:36.243 } 00:17:36.243 ] 00:17:36.243 }' 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.243 10:45:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:17:36.243 [2024-10-30 10:45:57.699124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:36.243 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:36.243 Zero copy mechanism will not be used. 00:17:36.243 Running I/O for 60 seconds... 00:17:36.810 10:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:36.810 10:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.810 10:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.810 [2024-10-30 10:45:58.066965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.810 10:45:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.810 10:45:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:36.810 [2024-10-30 10:45:58.119697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:36.810 [2024-10-30 10:45:58.122435] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.810 [2024-10-30 10:45:58.248515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:37.070 [2024-10-30 10:45:58.474336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:37.070 [2024-10-30 10:45:58.474912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:37.390 143.00 IOPS, 429.00 MiB/s [2024-10-30T10:45:58.860Z] [2024-10-30 10:45:58.837460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:37.666 [2024-10-30 10:45:58.838322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:37.666 [2024-10-30 10:45:59.073570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.666 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.925 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.925 "name": "raid_bdev1", 00:17:37.925 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:37.925 "strip_size_kb": 0, 00:17:37.925 "state": "online", 00:17:37.925 "raid_level": "raid1", 00:17:37.925 "superblock": false, 00:17:37.925 "num_base_bdevs": 2, 00:17:37.925 "num_base_bdevs_discovered": 2, 00:17:37.925 "num_base_bdevs_operational": 2, 00:17:37.925 "process": { 00:17:37.925 "type": "rebuild", 00:17:37.925 "target": "spare", 00:17:37.925 "progress": { 00:17:37.925 "blocks": 10240, 00:17:37.925 "percent": 15 00:17:37.925 } 00:17:37.925 
}, 00:17:37.925 "base_bdevs_list": [ 00:17:37.925 { 00:17:37.925 "name": "spare", 00:17:37.925 "uuid": "f31e7ec5-e95a-56a5-a5f4-dd8d071a8461", 00:17:37.925 "is_configured": true, 00:17:37.925 "data_offset": 0, 00:17:37.925 "data_size": 65536 00:17:37.925 }, 00:17:37.925 { 00:17:37.925 "name": "BaseBdev2", 00:17:37.925 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:37.925 "is_configured": true, 00:17:37.925 "data_offset": 0, 00:17:37.925 "data_size": 65536 00:17:37.925 } 00:17:37.925 ] 00:17:37.925 }' 00:17:37.925 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.925 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.925 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.925 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.925 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:37.925 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.925 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.925 [2024-10-30 10:45:59.285891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.925 [2024-10-30 10:45:59.300586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:38.184 [2024-10-30 10:45:59.408719] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:38.184 [2024-10-30 10:45:59.425717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.184 [2024-10-30 10:45:59.425926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.184 [2024-10-30 10:45:59.425952] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.184 [2024-10-30 10:45:59.479901] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:38.184 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.184 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.184 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.184 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.184 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.184 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.184 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.184 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.185 "name": "raid_bdev1", 00:17:38.185 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:38.185 "strip_size_kb": 0, 00:17:38.185 "state": "online", 00:17:38.185 "raid_level": "raid1", 00:17:38.185 "superblock": false, 00:17:38.185 "num_base_bdevs": 2, 00:17:38.185 "num_base_bdevs_discovered": 1, 00:17:38.185 "num_base_bdevs_operational": 1, 00:17:38.185 "base_bdevs_list": [ 00:17:38.185 { 00:17:38.185 "name": null, 00:17:38.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.185 "is_configured": false, 00:17:38.185 "data_offset": 0, 00:17:38.185 "data_size": 65536 00:17:38.185 }, 00:17:38.185 { 00:17:38.185 "name": "BaseBdev2", 00:17:38.185 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:38.185 "is_configured": true, 00:17:38.185 "data_offset": 0, 00:17:38.185 "data_size": 65536 00:17:38.185 } 00:17:38.185 ] 00:17:38.185 }' 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.185 10:45:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.702 124.00 IOPS, 372.00 MiB/s [2024-10-30T10:46:00.172Z] 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.702 "name": "raid_bdev1", 00:17:38.702 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:38.702 "strip_size_kb": 0, 00:17:38.702 "state": "online", 00:17:38.702 "raid_level": "raid1", 00:17:38.702 "superblock": false, 00:17:38.702 "num_base_bdevs": 2, 00:17:38.702 "num_base_bdevs_discovered": 1, 00:17:38.702 "num_base_bdevs_operational": 1, 00:17:38.702 "base_bdevs_list": [ 00:17:38.702 { 00:17:38.702 "name": null, 00:17:38.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.702 "is_configured": false, 00:17:38.702 "data_offset": 0, 00:17:38.702 "data_size": 65536 00:17:38.702 }, 00:17:38.702 { 00:17:38.702 "name": "BaseBdev2", 00:17:38.702 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:38.702 "is_configured": true, 00:17:38.702 "data_offset": 0, 00:17:38.702 "data_size": 65536 00:17:38.702 } 00:17:38.702 ] 00:17:38.702 }' 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.702 10:46:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.702 10:46:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.993 [2024-10-30 10:46:00.175235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.993 10:46:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.993 10:46:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.993 [2024-10-30 10:46:00.232627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:38.993 [2024-10-30 10:46:00.235349] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.993 [2024-10-30 10:46:00.351837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:38.993 [2024-10-30 10:46:00.352448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:39.252 [2024-10-30 10:46:00.562826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:39.252 [2024-10-30 10:46:00.563413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:39.820 139.33 IOPS, 418.00 MiB/s [2024-10-30T10:46:01.290Z] [2024-10-30 10:46:01.013473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:39.821 [2024-10-30 10:46:01.014038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.821 [2024-10-30 10:46:01.250821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:39.821 [2024-10-30 10:46:01.251736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.821 "name": "raid_bdev1", 00:17:39.821 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:39.821 "strip_size_kb": 0, 00:17:39.821 "state": "online", 00:17:39.821 "raid_level": "raid1", 00:17:39.821 "superblock": false, 00:17:39.821 "num_base_bdevs": 2, 00:17:39.821 "num_base_bdevs_discovered": 2, 00:17:39.821 "num_base_bdevs_operational": 2, 00:17:39.821 "process": { 00:17:39.821 "type": "rebuild", 00:17:39.821 "target": "spare", 00:17:39.821 "progress": { 00:17:39.821 "blocks": 12288, 00:17:39.821 "percent": 18 00:17:39.821 } 00:17:39.821 }, 00:17:39.821 "base_bdevs_list": [ 00:17:39.821 { 00:17:39.821 "name": "spare", 00:17:39.821 "uuid": "f31e7ec5-e95a-56a5-a5f4-dd8d071a8461", 00:17:39.821 "is_configured": true, 00:17:39.821 "data_offset": 0, 00:17:39.821 
"data_size": 65536 00:17:39.821 }, 00:17:39.821 { 00:17:39.821 "name": "BaseBdev2", 00:17:39.821 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:39.821 "is_configured": true, 00:17:39.821 "data_offset": 0, 00:17:39.821 "data_size": 65536 00:17:39.821 } 00:17:39.821 ] 00:17:39.821 }' 00:17:39.821 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=435 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.080 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.080 "name": "raid_bdev1", 00:17:40.080 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:40.080 "strip_size_kb": 0, 00:17:40.080 "state": "online", 00:17:40.080 "raid_level": "raid1", 00:17:40.080 "superblock": false, 00:17:40.080 "num_base_bdevs": 2, 00:17:40.080 "num_base_bdevs_discovered": 2, 00:17:40.080 "num_base_bdevs_operational": 2, 00:17:40.080 "process": { 00:17:40.080 "type": "rebuild", 00:17:40.080 "target": "spare", 00:17:40.081 "progress": { 00:17:40.081 "blocks": 14336, 00:17:40.081 "percent": 21 00:17:40.081 } 00:17:40.081 }, 00:17:40.081 "base_bdevs_list": [ 00:17:40.081 { 00:17:40.081 "name": "spare", 00:17:40.081 "uuid": "f31e7ec5-e95a-56a5-a5f4-dd8d071a8461", 00:17:40.081 "is_configured": true, 00:17:40.081 "data_offset": 0, 00:17:40.081 "data_size": 65536 00:17:40.081 }, 00:17:40.081 { 00:17:40.081 "name": "BaseBdev2", 00:17:40.081 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:40.081 "is_configured": true, 00:17:40.081 "data_offset": 0, 00:17:40.081 "data_size": 65536 00:17:40.081 } 00:17:40.081 ] 00:17:40.081 }' 00:17:40.081 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.081 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.081 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:40.081 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.081 10:46:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.340 [2024-10-30 10:46:01.715437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:40.598 127.75 IOPS, 383.25 MiB/s [2024-10-30T10:46:02.068Z] [2024-10-30 10:46:01.920395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.166 "name": "raid_bdev1", 00:17:41.166 "uuid": 
"d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:41.166 "strip_size_kb": 0, 00:17:41.166 "state": "online", 00:17:41.166 "raid_level": "raid1", 00:17:41.166 "superblock": false, 00:17:41.166 "num_base_bdevs": 2, 00:17:41.166 "num_base_bdevs_discovered": 2, 00:17:41.166 "num_base_bdevs_operational": 2, 00:17:41.166 "process": { 00:17:41.166 "type": "rebuild", 00:17:41.166 "target": "spare", 00:17:41.166 "progress": { 00:17:41.166 "blocks": 32768, 00:17:41.166 "percent": 50 00:17:41.166 } 00:17:41.166 }, 00:17:41.166 "base_bdevs_list": [ 00:17:41.166 { 00:17:41.166 "name": "spare", 00:17:41.166 "uuid": "f31e7ec5-e95a-56a5-a5f4-dd8d071a8461", 00:17:41.166 "is_configured": true, 00:17:41.166 "data_offset": 0, 00:17:41.166 "data_size": 65536 00:17:41.166 }, 00:17:41.166 { 00:17:41.166 "name": "BaseBdev2", 00:17:41.166 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:41.166 "is_configured": true, 00:17:41.166 "data_offset": 0, 00:17:41.166 "data_size": 65536 00:17:41.166 } 00:17:41.166 ] 00:17:41.166 }' 00:17:41.166 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.426 [2024-10-30 10:46:02.664633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:41.426 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.426 [2024-10-30 10:46:02.665447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:41.426 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.426 113.80 IOPS, 341.40 MiB/s [2024-10-30T10:46:02.896Z] 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.426 10:46:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.685 [2024-10-30 10:46:02.907524] bdev_raid.c:
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:42.252 [2024-10-30 10:46:03.601227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:42.566 101.50 IOPS, 304.50 MiB/s [2024-10-30T10:46:04.036Z] 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.566 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.566 "name": "raid_bdev1", 00:17:42.567 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:42.567 "strip_size_kb": 0, 00:17:42.567 "state": "online", 00:17:42.567 "raid_level": "raid1", 00:17:42.567 "superblock": false, 00:17:42.567 "num_base_bdevs": 2, 00:17:42.567 "num_base_bdevs_discovered": 2, 00:17:42.567 "num_base_bdevs_operational": 
2, 00:17:42.567 "process": { 00:17:42.567 "type": "rebuild", 00:17:42.567 "target": "spare", 00:17:42.567 "progress": { 00:17:42.567 "blocks": 53248, 00:17:42.567 "percent": 81 00:17:42.567 } 00:17:42.567 }, 00:17:42.567 "base_bdevs_list": [ 00:17:42.567 { 00:17:42.567 "name": "spare", 00:17:42.567 "uuid": "f31e7ec5-e95a-56a5-a5f4-dd8d071a8461", 00:17:42.567 "is_configured": true, 00:17:42.567 "data_offset": 0, 00:17:42.567 "data_size": 65536 00:17:42.567 }, 00:17:42.567 { 00:17:42.567 "name": "BaseBdev2", 00:17:42.567 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:42.567 "is_configured": true, 00:17:42.567 "data_offset": 0, 00:17:42.567 "data_size": 65536 00:17:42.567 } 00:17:42.567 ] 00:17:42.567 }' 00:17:42.567 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.567 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:42.567 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.567 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.567 10:46:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.842 [2024-10-30 10:46:04.051510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:17:43.101 [2024-10-30 10:46:04.498658] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:43.361 [2024-10-30 10:46:04.598753] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:43.361 [2024-10-30 10:46:04.601194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.620 91.71 IOPS, 275.14 MiB/s [2024-10-30T10:46:05.090Z] 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.620 10:46:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.620 "name": "raid_bdev1", 00:17:43.620 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:43.620 "strip_size_kb": 0, 00:17:43.620 "state": "online", 00:17:43.620 "raid_level": "raid1", 00:17:43.620 "superblock": false, 00:17:43.620 "num_base_bdevs": 2, 00:17:43.620 "num_base_bdevs_discovered": 2, 00:17:43.620 "num_base_bdevs_operational": 2, 00:17:43.620 "base_bdevs_list": [ 00:17:43.620 { 00:17:43.620 "name": "spare", 00:17:43.620 "uuid": "f31e7ec5-e95a-56a5-a5f4-dd8d071a8461", 00:17:43.620 "is_configured": true, 00:17:43.620 "data_offset": 0, 00:17:43.620 "data_size": 65536 00:17:43.620 }, 00:17:43.620 { 00:17:43.620 "name": "BaseBdev2", 00:17:43.620 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:43.620 "is_configured": true, 00:17:43.620 "data_offset": 0, 00:17:43.620 
"data_size": 65536 00:17:43.620 } 00:17:43.620 ] 00:17:43.620 }' 00:17:43.620 10:46:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.620 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.879 "name": "raid_bdev1", 00:17:43.879 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:43.879 "strip_size_kb": 0, 00:17:43.879 "state": "online", 00:17:43.879 "raid_level": 
"raid1", 00:17:43.879 "superblock": false, 00:17:43.879 "num_base_bdevs": 2, 00:17:43.879 "num_base_bdevs_discovered": 2, 00:17:43.879 "num_base_bdevs_operational": 2, 00:17:43.879 "base_bdevs_list": [ 00:17:43.879 { 00:17:43.879 "name": "spare", 00:17:43.879 "uuid": "f31e7ec5-e95a-56a5-a5f4-dd8d071a8461", 00:17:43.879 "is_configured": true, 00:17:43.879 "data_offset": 0, 00:17:43.879 "data_size": 65536 00:17:43.879 }, 00:17:43.879 { 00:17:43.879 "name": "BaseBdev2", 00:17:43.879 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:43.879 "is_configured": true, 00:17:43.879 "data_offset": 0, 00:17:43.879 "data_size": 65536 00:17:43.879 } 00:17:43.879 ] 00:17:43.879 }' 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.879 "name": "raid_bdev1", 00:17:43.879 "uuid": "d454601c-569f-4ea3-9b6b-de7cd4fd9878", 00:17:43.879 "strip_size_kb": 0, 00:17:43.879 "state": "online", 00:17:43.879 "raid_level": "raid1", 00:17:43.879 "superblock": false, 00:17:43.879 "num_base_bdevs": 2, 00:17:43.879 "num_base_bdevs_discovered": 2, 00:17:43.879 "num_base_bdevs_operational": 2, 00:17:43.879 "base_bdevs_list": [ 00:17:43.879 { 00:17:43.879 "name": "spare", 00:17:43.879 "uuid": "f31e7ec5-e95a-56a5-a5f4-dd8d071a8461", 00:17:43.879 "is_configured": true, 00:17:43.879 "data_offset": 0, 00:17:43.879 "data_size": 65536 00:17:43.879 }, 00:17:43.879 { 00:17:43.879 "name": "BaseBdev2", 00:17:43.879 "uuid": "cae77902-e7f1-5869-897a-b9bde78dc7ef", 00:17:43.879 "is_configured": true, 00:17:43.879 "data_offset": 0, 00:17:43.879 "data_size": 65536 00:17:43.879 } 00:17:43.879 ] 00:17:43.879 }' 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.879 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.448 83.88 IOPS, 251.62 MiB/s 
[2024-10-30T10:46:05.919Z] 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.449 [2024-10-30 10:46:05.739704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.449 [2024-10-30 10:46:05.739857] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.449 00:17:44.449 Latency(us) 00:17:44.449 [2024-10-30T10:46:05.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.449 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:44.449 raid_bdev1 : 8.11 83.03 249.08 0.00 0.00 16703.88 268.10 117249.86 00:17:44.449 [2024-10-30T10:46:05.919Z] =================================================================================================================== 00:17:44.449 [2024-10-30T10:46:05.919Z] Total : 83.03 249.08 0.00 0.00 16703.88 268.10 117249.86 00:17:44.449 [2024-10-30 10:46:05.827730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.449 { 00:17:44.449 "results": [ 00:17:44.449 { 00:17:44.449 "job": "raid_bdev1", 00:17:44.449 "core_mask": "0x1", 00:17:44.449 "workload": "randrw", 00:17:44.449 "percentage": 50, 00:17:44.449 "status": "finished", 00:17:44.449 "queue_depth": 2, 00:17:44.449 "io_size": 3145728, 00:17:44.449 "runtime": 8.105701, 00:17:44.449 "iops": 83.02798240398948, 00:17:44.449 "mibps": 249.08394721196845, 00:17:44.449 "io_failed": 0, 00:17:44.449 "io_timeout": 0, 00:17:44.449 "avg_latency_us": 16703.87585573416, 00:17:44.449 "min_latency_us": 268.1018181818182, 00:17:44.449 "max_latency_us": 117249.86181818182 00:17:44.449 } 00:17:44.449 ], 00:17:44.449 "core_count": 1 00:17:44.449 } 00:17:44.449 
[2024-10-30 10:46:05.827950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.449 [2024-10-30 10:46:05.828115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.449 [2024-10-30 10:46:05.828134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:44.449 
10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:44.449 10:46:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:45.016 /dev/nbd0 00:17:45.016 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:45.016 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.017 1+0 records in 00:17:45.017 1+0 records out 00:17:45.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254395 s, 16.1 MB/s 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.017 
10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.017 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:45.276 
/dev/nbd1 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.276 1+0 records in 00:17:45.276 1+0 records out 00:17:45.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326017 s, 12.6 MB/s 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:17:45.276 
10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.276 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:45.535 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:45.535 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.535 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:45.535 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:45.535 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:45.535 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.535 10:46:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.794 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76864 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76864 ']' 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76864 00:17:46.053 10:46:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76864 00:17:46.053 killing process with pid 76864 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76864' 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76864 00:17:46.053 Received shutdown signal, test time was about 9.653194 seconds 00:17:46.053 00:17:46.053 Latency(us) 00:17:46.053 [2024-10-30T10:46:07.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.053 [2024-10-30T10:46:07.523Z] =================================================================================================================== 00:17:46.053 [2024-10-30T10:46:07.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.053 [2024-10-30 10:46:07.355122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:46.053 10:46:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76864 00:17:46.313 [2024-10-30 10:46:07.561116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.398 10:46:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:47.398 00:17:47.398 real 0m12.974s 00:17:47.398 user 0m17.009s 00:17:47.398 sys 0m1.418s 00:17:47.398 10:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:47.398 10:46:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.398 
************************************ 00:17:47.399 END TEST raid_rebuild_test_io 00:17:47.399 ************************************ 00:17:47.399 10:46:08 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:17:47.399 10:46:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:47.399 10:46:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:47.399 10:46:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.399 ************************************ 00:17:47.399 START TEST raid_rebuild_test_sb_io 00:17:47.399 ************************************ 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:47.399 10:46:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77241 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77241 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77241 ']' 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:47.399 10:46:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.399 [2024-10-30 10:46:08.785555] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:17:47.399 [2024-10-30 10:46:08.785982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:47.399 Zero copy mechanism will not be used. 
00:17:47.399 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77241 ] 00:17:47.659 [2024-10-30 10:46:08.962629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.659 [2024-10-30 10:46:09.093627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.918 [2024-10-30 10:46:09.296277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.919 [2024-10-30 10:46:09.296565] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.486 BaseBdev1_malloc 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.486 [2024-10-30 10:46:09.883098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.486 [2024-10-30 10:46:09.883188] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.486 [2024-10-30 10:46:09.883224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:48.486 [2024-10-30 10:46:09.883244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.486 [2024-10-30 10:46:09.886266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.486 [2024-10-30 10:46:09.886316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.486 BaseBdev1 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.486 BaseBdev2_malloc 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.486 [2024-10-30 10:46:09.941091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:48.486 [2024-10-30 10:46:09.941165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.486 [2024-10-30 10:46:09.941194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:17:48.486 [2024-10-30 10:46:09.941214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.486 [2024-10-30 10:46:09.943980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.486 [2024-10-30 10:46:09.944190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:48.486 BaseBdev2 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.486 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.744 spare_malloc 00:17:48.744 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.744 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:48.744 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.744 10:46:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.744 spare_delay 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.744 [2024-10-30 10:46:10.013770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.744 
[2024-10-30 10:46:10.014049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.744 [2024-10-30 10:46:10.014090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:48.744 [2024-10-30 10:46:10.014110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.744 [2024-10-30 10:46:10.017057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.744 [2024-10-30 10:46:10.017138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.744 spare 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.744 [2024-10-30 10:46:10.022049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.744 [2024-10-30 10:46:10.024596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.744 [2024-10-30 10:46:10.025028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:48.744 [2024-10-30 10:46:10.025061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:48.744 [2024-10-30 10:46:10.025408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:48.744 [2024-10-30 10:46:10.025605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:48.744 [2024-10-30 10:46:10.025620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:17:48.744 [2024-10-30 10:46:10.025788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.744 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.744 "name": "raid_bdev1", 00:17:48.744 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:48.744 "strip_size_kb": 0, 00:17:48.744 "state": "online", 00:17:48.744 "raid_level": "raid1", 00:17:48.744 "superblock": true, 00:17:48.744 "num_base_bdevs": 2, 00:17:48.744 "num_base_bdevs_discovered": 2, 00:17:48.744 "num_base_bdevs_operational": 2, 00:17:48.744 "base_bdevs_list": [ 00:17:48.744 { 00:17:48.744 "name": "BaseBdev1", 00:17:48.744 "uuid": "8fe28dbd-a917-577b-9b28-9366d3630795", 00:17:48.744 "is_configured": true, 00:17:48.744 "data_offset": 2048, 00:17:48.744 "data_size": 63488 00:17:48.744 }, 00:17:48.744 { 00:17:48.744 "name": "BaseBdev2", 00:17:48.744 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:48.744 "is_configured": true, 00:17:48.744 "data_offset": 2048, 00:17:48.744 "data_size": 63488 00:17:48.744 } 00:17:48.744 ] 00:17:48.744 }' 00:17:48.745 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.745 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.312 [2024-10-30 10:46:10.566579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.312 [2024-10-30 10:46:10.674225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.312 
10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.312 "name": "raid_bdev1", 00:17:49.312 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:49.312 "strip_size_kb": 0, 00:17:49.312 "state": "online", 00:17:49.312 "raid_level": "raid1", 00:17:49.312 "superblock": true, 00:17:49.312 "num_base_bdevs": 2, 00:17:49.312 "num_base_bdevs_discovered": 1, 00:17:49.312 "num_base_bdevs_operational": 1, 00:17:49.312 "base_bdevs_list": [ 00:17:49.312 { 00:17:49.312 "name": null, 00:17:49.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.312 "is_configured": false, 00:17:49.312 "data_offset": 0, 00:17:49.312 "data_size": 63488 00:17:49.312 }, 00:17:49.312 { 00:17:49.312 "name": "BaseBdev2", 00:17:49.312 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:49.312 "is_configured": true, 00:17:49.312 "data_offset": 2048, 
00:17:49.312 "data_size": 63488 00:17:49.312 } 00:17:49.312 ] 00:17:49.312 }' 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.312 10:46:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.571 [2024-10-30 10:46:10.782593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:49.571 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:49.571 Zero copy mechanism will not be used. 00:17:49.571 Running I/O for 60 seconds... 00:17:49.830 10:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.830 10:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.830 10:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.830 [2024-10-30 10:46:11.199462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.830 10:46:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.830 10:46:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:49.830 [2024-10-30 10:46:11.255824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:49.830 [2024-10-30 10:46:11.258350] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.089 [2024-10-30 10:46:11.359987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:50.089 [2024-10-30 10:46:11.360715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:50.089 [2024-10-30 10:46:11.488846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:50.089 
[2024-10-30 10:46:11.489133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:50.349 [2024-10-30 10:46:11.734222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:50.349 [2024-10-30 10:46:11.734926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:50.608 177.00 IOPS, 531.00 MiB/s [2024-10-30T10:46:12.078Z] [2024-10-30 10:46:11.960399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:50.608 [2024-10-30 10:46:11.960861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:50.866 [2024-10-30 10:46:12.274995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.866 "name": "raid_bdev1", 00:17:50.866 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:50.866 "strip_size_kb": 0, 00:17:50.866 "state": "online", 00:17:50.866 "raid_level": "raid1", 00:17:50.866 "superblock": true, 00:17:50.866 "num_base_bdevs": 2, 00:17:50.866 "num_base_bdevs_discovered": 2, 00:17:50.866 "num_base_bdevs_operational": 2, 00:17:50.866 "process": { 00:17:50.866 "type": "rebuild", 00:17:50.866 "target": "spare", 00:17:50.866 "progress": { 00:17:50.866 "blocks": 12288, 00:17:50.866 "percent": 19 00:17:50.866 } 00:17:50.866 }, 00:17:50.866 "base_bdevs_list": [ 00:17:50.866 { 00:17:50.866 "name": "spare", 00:17:50.866 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:50.866 "is_configured": true, 00:17:50.866 "data_offset": 2048, 00:17:50.866 "data_size": 63488 00:17:50.866 }, 00:17:50.866 { 00:17:50.866 "name": "BaseBdev2", 00:17:50.866 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:50.866 "is_configured": true, 00:17:50.866 "data_offset": 2048, 00:17:50.866 "data_size": 63488 00:17:50.866 } 00:17:50.866 ] 00:17:50.866 }' 00:17:50.866 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.125 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.125 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.125 [2024-10-30 10:46:12.377450] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:51.125 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.125 10:46:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:51.125 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.125 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.125 [2024-10-30 10:46:12.404731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.485 [2024-10-30 10:46:12.598943] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:51.485 [2024-10-30 10:46:12.609513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.485 [2024-10-30 10:46:12.609555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.485 [2024-10-30 10:46:12.609574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.485 [2024-10-30 10:46:12.644586] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.485 "name": "raid_bdev1", 00:17:51.485 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:51.485 "strip_size_kb": 0, 00:17:51.485 "state": "online", 00:17:51.485 "raid_level": "raid1", 00:17:51.485 "superblock": true, 00:17:51.485 "num_base_bdevs": 2, 00:17:51.485 "num_base_bdevs_discovered": 1, 00:17:51.485 "num_base_bdevs_operational": 1, 00:17:51.485 "base_bdevs_list": [ 00:17:51.485 { 00:17:51.485 "name": null, 00:17:51.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.485 "is_configured": false, 00:17:51.485 "data_offset": 0, 00:17:51.485 "data_size": 63488 00:17:51.485 }, 00:17:51.485 { 00:17:51.485 "name": "BaseBdev2", 00:17:51.485 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:51.485 "is_configured": true, 00:17:51.485 "data_offset": 2048, 00:17:51.485 "data_size": 63488 00:17:51.485 } 00:17:51.485 ] 00:17:51.485 }' 00:17:51.485 10:46:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.485 10:46:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.745 140.00 IOPS, 420.00 MiB/s [2024-10-30T10:46:13.215Z] 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.745 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.745 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.745 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.745 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.745 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.745 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.745 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.745 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.004 "name": "raid_bdev1", 00:17:52.004 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:52.004 "strip_size_kb": 0, 00:17:52.004 "state": "online", 00:17:52.004 "raid_level": "raid1", 00:17:52.004 "superblock": true, 00:17:52.004 "num_base_bdevs": 2, 00:17:52.004 "num_base_bdevs_discovered": 1, 00:17:52.004 "num_base_bdevs_operational": 1, 00:17:52.004 "base_bdevs_list": [ 00:17:52.004 { 00:17:52.004 "name": null, 00:17:52.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.004 "is_configured": false, 00:17:52.004 "data_offset": 0, 00:17:52.004 "data_size": 63488 00:17:52.004 }, 00:17:52.004 { 
00:17:52.004 "name": "BaseBdev2", 00:17:52.004 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:52.004 "is_configured": true, 00:17:52.004 "data_offset": 2048, 00:17:52.004 "data_size": 63488 00:17:52.004 } 00:17:52.004 ] 00:17:52.004 }' 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:52.004 [2024-10-30 10:46:13.357766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.004 10:46:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:52.004 [2024-10-30 10:46:13.420440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:52.004 [2024-10-30 10:46:13.422872] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.263 [2024-10-30 10:46:13.542954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:52.263 [2024-10-30 10:46:13.675236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:52.263 [2024-10-30 10:46:13.675579] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:52.780 160.67 IOPS, 482.00 MiB/s [2024-10-30T10:46:14.250Z] [2024-10-30 10:46:14.164927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:52.780 [2024-10-30 10:46:14.165308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.039 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.039 "name": "raid_bdev1", 00:17:53.039 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:53.039 "strip_size_kb": 0, 00:17:53.039 "state": "online", 00:17:53.039 "raid_level": "raid1", 00:17:53.039 "superblock": true, 00:17:53.039 
"num_base_bdevs": 2, 00:17:53.039 "num_base_bdevs_discovered": 2, 00:17:53.039 "num_base_bdevs_operational": 2, 00:17:53.040 "process": { 00:17:53.040 "type": "rebuild", 00:17:53.040 "target": "spare", 00:17:53.040 "progress": { 00:17:53.040 "blocks": 12288, 00:17:53.040 "percent": 19 00:17:53.040 } 00:17:53.040 }, 00:17:53.040 "base_bdevs_list": [ 00:17:53.040 { 00:17:53.040 "name": "spare", 00:17:53.040 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:53.040 "is_configured": true, 00:17:53.040 "data_offset": 2048, 00:17:53.040 "data_size": 63488 00:17:53.040 }, 00:17:53.040 { 00:17:53.040 "name": "BaseBdev2", 00:17:53.040 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:53.040 "is_configured": true, 00:17:53.040 "data_offset": 2048, 00:17:53.040 "data_size": 63488 00:17:53.040 } 00:17:53.040 ] 00:17:53.040 }' 00:17:53.040 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.040 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.040 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.299 [2024-10-30 10:46:14.509138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:53.299 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.299 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.300 "name": "raid_bdev1", 00:17:53.300 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:53.300 "strip_size_kb": 0, 00:17:53.300 "state": "online", 00:17:53.300 "raid_level": "raid1", 00:17:53.300 "superblock": true, 00:17:53.300 "num_base_bdevs": 2, 00:17:53.300 "num_base_bdevs_discovered": 2, 00:17:53.300 "num_base_bdevs_operational": 2, 00:17:53.300 "process": { 00:17:53.300 "type": "rebuild", 00:17:53.300 
"target": "spare", 00:17:53.300 "progress": { 00:17:53.300 "blocks": 14336, 00:17:53.300 "percent": 22 00:17:53.300 } 00:17:53.300 }, 00:17:53.300 "base_bdevs_list": [ 00:17:53.300 { 00:17:53.300 "name": "spare", 00:17:53.300 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:53.300 "is_configured": true, 00:17:53.300 "data_offset": 2048, 00:17:53.300 "data_size": 63488 00:17:53.300 }, 00:17:53.300 { 00:17:53.300 "name": "BaseBdev2", 00:17:53.300 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:53.300 "is_configured": true, 00:17:53.300 "data_offset": 2048, 00:17:53.300 "data_size": 63488 00:17:53.300 } 00:17:53.300 ] 00:17:53.300 }' 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.300 [2024-10-30 10:46:14.636136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.300 10:46:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:53.559 139.00 IOPS, 417.00 MiB/s [2024-10-30T10:46:15.029Z] [2024-10-30 10:46:14.958609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:53.817 [2024-10-30 10:46:15.169838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:54.076 [2024-10-30 10:46:15.518069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.335 "name": "raid_bdev1", 00:17:54.335 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:54.335 "strip_size_kb": 0, 00:17:54.335 "state": "online", 00:17:54.335 "raid_level": "raid1", 00:17:54.335 "superblock": true, 00:17:54.335 "num_base_bdevs": 2, 00:17:54.335 "num_base_bdevs_discovered": 2, 00:17:54.335 "num_base_bdevs_operational": 2, 00:17:54.335 "process": { 00:17:54.335 "type": "rebuild", 00:17:54.335 "target": "spare", 00:17:54.335 "progress": { 00:17:54.335 "blocks": 28672, 00:17:54.335 "percent": 45 00:17:54.335 } 00:17:54.335 }, 00:17:54.335 "base_bdevs_list": [ 00:17:54.335 { 00:17:54.335 "name": "spare", 00:17:54.335 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 
00:17:54.335 "is_configured": true, 00:17:54.335 "data_offset": 2048, 00:17:54.335 "data_size": 63488 00:17:54.335 }, 00:17:54.335 { 00:17:54.335 "name": "BaseBdev2", 00:17:54.335 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:54.335 "is_configured": true, 00:17:54.335 "data_offset": 2048, 00:17:54.335 "data_size": 63488 00:17:54.335 } 00:17:54.335 ] 00:17:54.335 }' 00:17:54.335 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.595 124.60 IOPS, 373.80 MiB/s [2024-10-30T10:46:16.065Z] 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.595 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.595 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.595 10:46:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.854 [2024-10-30 10:46:16.229048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:54.854 [2024-10-30 10:46:16.229587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:55.113 [2024-10-30 10:46:16.573169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:55.372 [2024-10-30 10:46:16.796466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:55.372 [2024-10-30 10:46:16.796691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:55.631 110.17 IOPS, 330.50 MiB/s [2024-10-30T10:46:17.101Z] 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.631 10:46:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.631 "name": "raid_bdev1", 00:17:55.631 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:55.631 "strip_size_kb": 0, 00:17:55.631 "state": "online", 00:17:55.631 "raid_level": "raid1", 00:17:55.631 "superblock": true, 00:17:55.631 "num_base_bdevs": 2, 00:17:55.631 "num_base_bdevs_discovered": 2, 00:17:55.631 "num_base_bdevs_operational": 2, 00:17:55.631 "process": { 00:17:55.631 "type": "rebuild", 00:17:55.631 "target": "spare", 00:17:55.631 "progress": { 00:17:55.631 "blocks": 47104, 00:17:55.631 "percent": 74 00:17:55.631 } 00:17:55.631 }, 00:17:55.631 "base_bdevs_list": [ 00:17:55.631 { 00:17:55.631 "name": "spare", 00:17:55.631 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:55.631 "is_configured": true, 00:17:55.631 "data_offset": 2048, 
00:17:55.631 "data_size": 63488 00:17:55.631 }, 00:17:55.631 { 00:17:55.631 "name": "BaseBdev2", 00:17:55.631 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:55.631 "is_configured": true, 00:17:55.631 "data_offset": 2048, 00:17:55.631 "data_size": 63488 00:17:55.631 } 00:17:55.631 ] 00:17:55.631 }' 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.631 10:46:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.631 10:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.631 10:46:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:56.567 99.86 IOPS, 299.57 MiB/s [2024-10-30T10:46:18.037Z] [2024-10-30 10:46:17.814913] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:56.567 [2024-10-30 10:46:17.921543] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:56.567 [2024-10-30 10:46:17.923843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.826 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.826 "name": "raid_bdev1", 00:17:56.826 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:56.826 "strip_size_kb": 0, 00:17:56.826 "state": "online", 00:17:56.826 "raid_level": "raid1", 00:17:56.826 "superblock": true, 00:17:56.826 "num_base_bdevs": 2, 00:17:56.826 "num_base_bdevs_discovered": 2, 00:17:56.827 "num_base_bdevs_operational": 2, 00:17:56.827 "base_bdevs_list": [ 00:17:56.827 { 00:17:56.827 "name": "spare", 00:17:56.827 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:56.827 "is_configured": true, 00:17:56.827 "data_offset": 2048, 00:17:56.827 "data_size": 63488 00:17:56.827 }, 00:17:56.827 { 00:17:56.827 "name": "BaseBdev2", 00:17:56.827 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:56.827 "is_configured": true, 00:17:56.827 "data_offset": 2048, 00:17:56.827 "data_size": 63488 00:17:56.827 } 00:17:56.827 ] 00:17:56.827 }' 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e 
]] 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.827 "name": "raid_bdev1", 00:17:56.827 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:56.827 "strip_size_kb": 0, 00:17:56.827 "state": "online", 00:17:56.827 "raid_level": "raid1", 00:17:56.827 "superblock": true, 00:17:56.827 "num_base_bdevs": 2, 00:17:56.827 "num_base_bdevs_discovered": 2, 00:17:56.827 "num_base_bdevs_operational": 2, 00:17:56.827 "base_bdevs_list": [ 00:17:56.827 { 00:17:56.827 "name": "spare", 00:17:56.827 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:56.827 "is_configured": true, 00:17:56.827 "data_offset": 2048, 00:17:56.827 "data_size": 63488 00:17:56.827 }, 00:17:56.827 { 00:17:56.827 "name": "BaseBdev2", 
00:17:56.827 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:56.827 "is_configured": true, 00:17:56.827 "data_offset": 2048, 00:17:56.827 "data_size": 63488 00:17:56.827 } 00:17:56.827 ] 00:17:56.827 }' 00:17:56.827 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.086 "name": "raid_bdev1", 00:17:57.086 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:57.086 "strip_size_kb": 0, 00:17:57.086 "state": "online", 00:17:57.086 "raid_level": "raid1", 00:17:57.086 "superblock": true, 00:17:57.086 "num_base_bdevs": 2, 00:17:57.086 "num_base_bdevs_discovered": 2, 00:17:57.086 "num_base_bdevs_operational": 2, 00:17:57.086 "base_bdevs_list": [ 00:17:57.086 { 00:17:57.086 "name": "spare", 00:17:57.086 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:57.086 "is_configured": true, 00:17:57.086 "data_offset": 2048, 00:17:57.086 "data_size": 63488 00:17:57.086 }, 00:17:57.086 { 00:17:57.086 "name": "BaseBdev2", 00:17:57.086 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:57.086 "is_configured": true, 00:17:57.086 "data_offset": 2048, 00:17:57.086 "data_size": 63488 00:17:57.086 } 00:17:57.086 ] 00:17:57.086 }' 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.086 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.604 92.00 IOPS, 276.00 MiB/s [2024-10-30T10:46:19.074Z] 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.604 [2024-10-30 10:46:18.876460] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:17:57.604 [2024-10-30 10:46:18.876511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.604 00:17:57.604 Latency(us) 00:17:57.604 [2024-10-30T10:46:19.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.604 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:57.604 raid_bdev1 : 8.13 90.81 272.43 0.00 0.00 14568.97 260.65 117249.86 00:17:57.604 [2024-10-30T10:46:19.074Z] =================================================================================================================== 00:17:57.604 [2024-10-30T10:46:19.074Z] Total : 90.81 272.43 0.00 0.00 14568.97 260.65 117249.86 00:17:57.604 [2024-10-30 10:46:18.929638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.604 [2024-10-30 10:46:18.929849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.604 [2024-10-30 10:46:18.930013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.604 [2024-10-30 10:46:18.930209] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:57.604 { 00:17:57.604 "results": [ 00:17:57.604 { 00:17:57.604 "job": "raid_bdev1", 00:17:57.604 "core_mask": "0x1", 00:17:57.604 "workload": "randrw", 00:17:57.604 "percentage": 50, 00:17:57.604 "status": "finished", 00:17:57.604 "queue_depth": 2, 00:17:57.604 "io_size": 3145728, 00:17:57.604 "runtime": 8.126766, 00:17:57.604 "iops": 90.81103110388561, 00:17:57.604 "mibps": 272.4330933116568, 00:17:57.604 "io_failed": 0, 00:17:57.604 "io_timeout": 0, 00:17:57.604 "avg_latency_us": 14568.966937669376, 00:17:57.604 "min_latency_us": 260.6545454545454, 00:17:57.604 "max_latency_us": 117249.86181818182 00:17:57.604 } 00:17:57.604 ], 00:17:57.604 "core_count": 1 00:17:57.604 } 00:17:57.604 
10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:57.604 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.605 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:57.605 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.605 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.605 10:46:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:57.863 /dev/nbd0 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:57.863 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:58.123 1+0 records in 00:17:58.123 1+0 records out 00:17:58.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612831 s, 6.7 MB/s 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:58.123 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:58.382 /dev/nbd1 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:58.382 10:46:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:58.382 1+0 records in 00:17:58.382 1+0 records out 00:17:58.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605226 s, 6.8 MB/s 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:58.382 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:58.382 
10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:58.641 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:58.641 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:58.641 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:58.641 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:58.641 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:58.641 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.641 10:46:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.899 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.158 
10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.158 [2024-10-30 10:46:20.473708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:59.158 [2024-10-30 10:46:20.473805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.158 [2024-10-30 10:46:20.473847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:59.158 [2024-10-30 10:46:20.473865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.158 [2024-10-30 10:46:20.476839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.158 [2024-10-30 10:46:20.477029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:59.158 [2024-10-30 10:46:20.477154] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:59.158 [2024-10-30 10:46:20.477232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.158 [2024-10-30 10:46:20.477406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:59.158 spare 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.158 [2024-10-30 10:46:20.577545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:17:59.158 [2024-10-30 10:46:20.577590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:59.158 [2024-10-30 10:46:20.578029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:17:59.158 [2024-10-30 10:46:20.578301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:59.158 [2024-10-30 10:46:20.578322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:59.158 [2024-10-30 10:46:20.578583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.158 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.417 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.417 "name": "raid_bdev1", 00:17:59.417 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:59.417 "strip_size_kb": 0, 00:17:59.417 "state": "online", 00:17:59.417 "raid_level": "raid1", 00:17:59.417 "superblock": true, 00:17:59.417 "num_base_bdevs": 2, 00:17:59.417 "num_base_bdevs_discovered": 2, 00:17:59.417 "num_base_bdevs_operational": 2, 00:17:59.417 "base_bdevs_list": [ 00:17:59.417 { 00:17:59.417 "name": "spare", 00:17:59.417 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:59.417 "is_configured": true, 00:17:59.417 "data_offset": 2048, 00:17:59.417 "data_size": 63488 00:17:59.417 }, 00:17:59.417 { 00:17:59.417 "name": "BaseBdev2", 00:17:59.417 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:59.417 "is_configured": true, 00:17:59.417 "data_offset": 2048, 00:17:59.417 "data_size": 63488 00:17:59.417 } 00:17:59.417 ] 00:17:59.417 }' 00:17:59.417 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.417 10:46:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.720 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.720 "name": "raid_bdev1", 00:17:59.720 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:59.720 "strip_size_kb": 0, 00:17:59.720 "state": "online", 00:17:59.721 "raid_level": "raid1", 00:17:59.721 "superblock": true, 00:17:59.721 "num_base_bdevs": 2, 00:17:59.721 "num_base_bdevs_discovered": 2, 00:17:59.721 "num_base_bdevs_operational": 2, 00:17:59.721 "base_bdevs_list": [ 00:17:59.721 { 00:17:59.721 "name": "spare", 00:17:59.721 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:17:59.721 "is_configured": true, 00:17:59.721 "data_offset": 2048, 00:17:59.721 "data_size": 63488 00:17:59.721 }, 00:17:59.721 { 00:17:59.721 "name": "BaseBdev2", 00:17:59.721 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:59.721 "is_configured": true, 00:17:59.721 "data_offset": 2048, 00:17:59.721 "data_size": 63488 00:17:59.721 } 00:17:59.721 ] 00:17:59.721 }' 00:17:59.721 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.981 [2024-10-30 10:46:21.302846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.981 "name": "raid_bdev1", 00:17:59.981 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:17:59.981 "strip_size_kb": 0, 00:17:59.981 "state": "online", 00:17:59.981 "raid_level": "raid1", 00:17:59.981 "superblock": true, 00:17:59.981 "num_base_bdevs": 2, 00:17:59.981 "num_base_bdevs_discovered": 1, 00:17:59.981 "num_base_bdevs_operational": 1, 00:17:59.981 "base_bdevs_list": [ 00:17:59.981 { 00:17:59.981 "name": null, 00:17:59.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.981 "is_configured": false, 00:17:59.981 "data_offset": 0, 00:17:59.981 "data_size": 63488 00:17:59.981 }, 00:17:59.981 { 00:17:59.981 "name": "BaseBdev2", 00:17:59.981 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:17:59.981 
"is_configured": true, 00:17:59.981 "data_offset": 2048, 00:17:59.981 "data_size": 63488 00:17:59.981 } 00:17:59.981 ] 00:17:59.981 }' 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.981 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.548 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:00.548 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.548 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.548 [2024-10-30 10:46:21.819152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.548 [2024-10-30 10:46:21.819407] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:00.548 [2024-10-30 10:46:21.819429] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:00.548 [2024-10-30 10:46:21.819540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.548 [2024-10-30 10:46:21.835696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:18:00.548 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.548 10:46:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:00.548 [2024-10-30 10:46:21.838415] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.485 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.485 "name": "raid_bdev1", 00:18:01.485 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:01.485 "strip_size_kb": 0, 00:18:01.485 "state": "online", 
00:18:01.485 "raid_level": "raid1", 00:18:01.485 "superblock": true, 00:18:01.485 "num_base_bdevs": 2, 00:18:01.485 "num_base_bdevs_discovered": 2, 00:18:01.485 "num_base_bdevs_operational": 2, 00:18:01.485 "process": { 00:18:01.485 "type": "rebuild", 00:18:01.485 "target": "spare", 00:18:01.485 "progress": { 00:18:01.485 "blocks": 20480, 00:18:01.485 "percent": 32 00:18:01.485 } 00:18:01.485 }, 00:18:01.486 "base_bdevs_list": [ 00:18:01.486 { 00:18:01.486 "name": "spare", 00:18:01.486 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:18:01.486 "is_configured": true, 00:18:01.486 "data_offset": 2048, 00:18:01.486 "data_size": 63488 00:18:01.486 }, 00:18:01.486 { 00:18:01.486 "name": "BaseBdev2", 00:18:01.486 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:01.486 "is_configured": true, 00:18:01.486 "data_offset": 2048, 00:18:01.486 "data_size": 63488 00:18:01.486 } 00:18:01.486 ] 00:18:01.486 }' 00:18:01.486 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.486 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.486 10:46:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.745 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.745 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:01.745 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.745 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.745 [2024-10-30 10:46:23.008196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.745 [2024-10-30 10:46:23.047361] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:01.745 [2024-10-30 
10:46:23.047679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.745 [2024-10-30 10:46:23.047813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.746 [2024-10-30 10:46:23.047866] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.746 "name": "raid_bdev1", 00:18:01.746 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:01.746 "strip_size_kb": 0, 00:18:01.746 "state": "online", 00:18:01.746 "raid_level": "raid1", 00:18:01.746 "superblock": true, 00:18:01.746 "num_base_bdevs": 2, 00:18:01.746 "num_base_bdevs_discovered": 1, 00:18:01.746 "num_base_bdevs_operational": 1, 00:18:01.746 "base_bdevs_list": [ 00:18:01.746 { 00:18:01.746 "name": null, 00:18:01.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.746 "is_configured": false, 00:18:01.746 "data_offset": 0, 00:18:01.746 "data_size": 63488 00:18:01.746 }, 00:18:01.746 { 00:18:01.746 "name": "BaseBdev2", 00:18:01.746 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:01.746 "is_configured": true, 00:18:01.746 "data_offset": 2048, 00:18:01.746 "data_size": 63488 00:18:01.746 } 00:18:01.746 ] 00:18:01.746 }' 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.746 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.311 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:02.311 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.311 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.311 [2024-10-30 10:46:23.622866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:02.311 [2024-10-30 10:46:23.622996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.311 [2024-10-30 10:46:23.623071] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:18:02.311 [2024-10-30 10:46:23.623087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.311 [2024-10-30 10:46:23.623734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.311 [2024-10-30 10:46:23.623764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:02.311 [2024-10-30 10:46:23.623902] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:02.311 [2024-10-30 10:46:23.623922] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:02.311 [2024-10-30 10:46:23.623939] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:02.311 [2024-10-30 10:46:23.623966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:02.311 [2024-10-30 10:46:23.640333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:18:02.311 spare 00:18:02.311 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.311 10:46:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:02.311 [2024-10-30 10:46:23.642875] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.244 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.244 "name": "raid_bdev1", 00:18:03.244 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:03.244 "strip_size_kb": 0, 00:18:03.244 "state": "online", 00:18:03.244 "raid_level": "raid1", 00:18:03.244 "superblock": true, 00:18:03.244 "num_base_bdevs": 2, 00:18:03.244 "num_base_bdevs_discovered": 2, 00:18:03.244 "num_base_bdevs_operational": 2, 00:18:03.244 "process": { 00:18:03.245 "type": "rebuild", 00:18:03.245 "target": "spare", 00:18:03.245 "progress": { 00:18:03.245 "blocks": 20480, 00:18:03.245 "percent": 32 00:18:03.245 } 00:18:03.245 }, 00:18:03.245 "base_bdevs_list": [ 00:18:03.245 { 00:18:03.245 "name": "spare", 00:18:03.245 "uuid": "f140c3e7-a4a4-5bf5-b5f6-717235f7c68e", 00:18:03.245 "is_configured": true, 00:18:03.245 "data_offset": 2048, 00:18:03.245 "data_size": 63488 00:18:03.245 }, 00:18:03.245 { 00:18:03.245 "name": "BaseBdev2", 00:18:03.245 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:03.245 "is_configured": true, 00:18:03.245 "data_offset": 2048, 00:18:03.245 "data_size": 63488 00:18:03.245 } 00:18:03.245 ] 00:18:03.245 }' 00:18:03.245 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.502 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.503 [2024-10-30 10:46:24.804647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.503 [2024-10-30 10:46:24.851704] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:03.503 [2024-10-30 10:46:24.851804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.503 [2024-10-30 10:46:24.851827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.503 [2024-10-30 10:46:24.851843] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.503 "name": "raid_bdev1", 00:18:03.503 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:03.503 "strip_size_kb": 0, 00:18:03.503 "state": "online", 00:18:03.503 "raid_level": "raid1", 00:18:03.503 "superblock": true, 00:18:03.503 "num_base_bdevs": 2, 00:18:03.503 "num_base_bdevs_discovered": 1, 00:18:03.503 "num_base_bdevs_operational": 1, 00:18:03.503 "base_bdevs_list": [ 00:18:03.503 { 00:18:03.503 "name": null, 00:18:03.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.503 "is_configured": false, 00:18:03.503 "data_offset": 0, 00:18:03.503 "data_size": 63488 00:18:03.503 }, 00:18:03.503 { 00:18:03.503 "name": "BaseBdev2", 00:18:03.503 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:03.503 "is_configured": true, 00:18:03.503 "data_offset": 2048, 00:18:03.503 "data_size": 63488 00:18:03.503 } 00:18:03.503 ] 00:18:03.503 }' 
00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.503 10:46:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.068 "name": "raid_bdev1", 00:18:04.068 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:04.068 "strip_size_kb": 0, 00:18:04.068 "state": "online", 00:18:04.068 "raid_level": "raid1", 00:18:04.068 "superblock": true, 00:18:04.068 "num_base_bdevs": 2, 00:18:04.068 "num_base_bdevs_discovered": 1, 00:18:04.068 "num_base_bdevs_operational": 1, 00:18:04.068 "base_bdevs_list": [ 00:18:04.068 { 00:18:04.068 "name": null, 00:18:04.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.068 "is_configured": false, 00:18:04.068 "data_offset": 0, 
00:18:04.068 "data_size": 63488 00:18:04.068 }, 00:18:04.068 { 00:18:04.068 "name": "BaseBdev2", 00:18:04.068 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:04.068 "is_configured": true, 00:18:04.068 "data_offset": 2048, 00:18:04.068 "data_size": 63488 00:18:04.068 } 00:18:04.068 ] 00:18:04.068 }' 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.068 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:04.335 [2024-10-30 10:46:25.570065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:04.335 [2024-10-30 10:46:25.570177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.335 [2024-10-30 10:46:25.570211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:04.335 [2024-10-30 10:46:25.570231] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.335 [2024-10-30 10:46:25.570808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.335 [2024-10-30 10:46:25.570843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:04.335 [2024-10-30 10:46:25.570934] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:04.335 [2024-10-30 10:46:25.570968] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.335 [2024-10-30 10:46:25.571012] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:04.335 [2024-10-30 10:46:25.571029] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:04.335 BaseBdev1 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.335 10:46:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.292 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.292 "name": "raid_bdev1", 00:18:05.292 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:05.292 "strip_size_kb": 0, 00:18:05.292 "state": "online", 00:18:05.292 "raid_level": "raid1", 00:18:05.292 "superblock": true, 00:18:05.293 "num_base_bdevs": 2, 00:18:05.293 "num_base_bdevs_discovered": 1, 00:18:05.293 "num_base_bdevs_operational": 1, 00:18:05.293 "base_bdevs_list": [ 00:18:05.293 { 00:18:05.293 "name": null, 00:18:05.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.293 "is_configured": false, 00:18:05.293 "data_offset": 0, 00:18:05.293 "data_size": 63488 00:18:05.293 }, 00:18:05.293 { 00:18:05.293 "name": "BaseBdev2", 00:18:05.293 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:05.293 "is_configured": true, 00:18:05.293 "data_offset": 2048, 00:18:05.293 "data_size": 63488 00:18:05.293 } 00:18:05.293 ] 00:18:05.293 }' 00:18:05.293 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.293 10:46:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.861 "name": "raid_bdev1", 00:18:05.861 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:05.861 "strip_size_kb": 0, 00:18:05.861 "state": "online", 00:18:05.861 "raid_level": "raid1", 00:18:05.861 "superblock": true, 00:18:05.861 "num_base_bdevs": 2, 00:18:05.861 "num_base_bdevs_discovered": 1, 00:18:05.861 "num_base_bdevs_operational": 1, 00:18:05.861 "base_bdevs_list": [ 00:18:05.861 { 00:18:05.861 "name": null, 00:18:05.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.861 "is_configured": false, 00:18:05.861 "data_offset": 0, 00:18:05.861 "data_size": 63488 00:18:05.861 }, 00:18:05.861 { 00:18:05.861 "name": "BaseBdev2", 00:18:05.861 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:05.861 "is_configured": true, 
00:18:05.861 "data_offset": 2048, 00:18:05.861 "data_size": 63488 00:18:05.861 } 00:18:05.861 ] 00:18:05.861 }' 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:05.861 [2024-10-30 10:46:27.282833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.861 [2024-10-30 10:46:27.283102] 
bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:05.861 [2024-10-30 10:46:27.283122] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:05.861 request: 00:18:05.861 { 00:18:05.861 "base_bdev": "BaseBdev1", 00:18:05.861 "raid_bdev": "raid_bdev1", 00:18:05.861 "method": "bdev_raid_add_base_bdev", 00:18:05.861 "req_id": 1 00:18:05.861 } 00:18:05.861 Got JSON-RPC error response 00:18:05.861 response: 00:18:05.861 { 00:18:05.861 "code": -22, 00:18:05.861 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:05.861 } 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.861 10:46:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.237 "name": "raid_bdev1", 00:18:07.237 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:07.237 "strip_size_kb": 0, 00:18:07.237 "state": "online", 00:18:07.237 "raid_level": "raid1", 00:18:07.237 "superblock": true, 00:18:07.237 "num_base_bdevs": 2, 00:18:07.237 "num_base_bdevs_discovered": 1, 00:18:07.237 "num_base_bdevs_operational": 1, 00:18:07.237 "base_bdevs_list": [ 00:18:07.237 { 00:18:07.237 "name": null, 00:18:07.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.237 "is_configured": false, 00:18:07.237 "data_offset": 0, 00:18:07.237 "data_size": 63488 00:18:07.237 }, 00:18:07.237 { 00:18:07.237 "name": "BaseBdev2", 00:18:07.237 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:07.237 "is_configured": true, 00:18:07.237 "data_offset": 2048, 00:18:07.237 "data_size": 63488 00:18:07.237 } 00:18:07.237 ] 00:18:07.237 }' 
00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.237 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.496 "name": "raid_bdev1", 00:18:07.496 "uuid": "3c92e94f-f1a1-4e12-baaf-283fec549f16", 00:18:07.496 "strip_size_kb": 0, 00:18:07.496 "state": "online", 00:18:07.496 "raid_level": "raid1", 00:18:07.496 "superblock": true, 00:18:07.496 "num_base_bdevs": 2, 00:18:07.496 "num_base_bdevs_discovered": 1, 00:18:07.496 "num_base_bdevs_operational": 1, 00:18:07.496 "base_bdevs_list": [ 00:18:07.496 { 00:18:07.496 "name": null, 00:18:07.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.496 "is_configured": false, 00:18:07.496 "data_offset": 0, 
00:18:07.496 "data_size": 63488 00:18:07.496 }, 00:18:07.496 { 00:18:07.496 "name": "BaseBdev2", 00:18:07.496 "uuid": "88a3eec1-f0e1-519f-9454-20ef4595ca17", 00:18:07.496 "is_configured": true, 00:18:07.496 "data_offset": 2048, 00:18:07.496 "data_size": 63488 00:18:07.496 } 00:18:07.496 ] 00:18:07.496 }' 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.496 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.755 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.755 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77241 00:18:07.755 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77241 ']' 00:18:07.755 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77241 00:18:07.755 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:18:07.755 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.755 10:46:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77241 00:18:07.755 10:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:07.755 killing process with pid 77241 00:18:07.755 Received shutdown signal, test time was about 18.221166 seconds 00:18:07.755 00:18:07.755 Latency(us) 00:18:07.755 [2024-10-30T10:46:29.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.755 [2024-10-30T10:46:29.225Z] =================================================================================================================== 00:18:07.755 
[2024-10-30T10:46:29.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.755 10:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:07.755 10:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77241' 00:18:07.755 10:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77241 00:18:07.755 [2024-10-30 10:46:29.006760] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.755 10:46:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77241 00:18:07.755 [2024-10-30 10:46:29.006917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.755 [2024-10-30 10:46:29.007011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.755 [2024-10-30 10:46:29.007042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:07.755 [2024-10-30 10:46:29.200470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:09.133 ************************************ 00:18:09.133 END TEST raid_rebuild_test_sb_io 00:18:09.133 ************************************ 00:18:09.133 00:18:09.133 real 0m21.544s 00:18:09.133 user 0m29.426s 00:18:09.133 sys 0m2.031s 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:09.133 10:46:30 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:18:09.133 10:46:30 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:18:09.133 10:46:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 
']' 00:18:09.133 10:46:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:09.133 10:46:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.133 ************************************ 00:18:09.133 START TEST raid_rebuild_test 00:18:09.133 ************************************ 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77941 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77941 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77941 ']' 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.133 
10:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:09.133 10:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.133 [2024-10-30 10:46:30.393589] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:18:09.133 [2024-10-30 10:46:30.393942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77941 ] 00:18:09.133 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:09.133 Zero copy mechanism will not be used. 
00:18:09.133 [2024-10-30 10:46:30.567848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.393 [2024-10-30 10:46:30.688390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.653 [2024-10-30 10:46:30.891394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.653 [2024-10-30 10:46:30.891458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.221 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:10.221 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:18:10.221 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.221 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:10.221 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 BaseBdev1_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 [2024-10-30 10:46:31.441012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:10.222 [2024-10-30 10:46:31.441118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.222 [2024-10-30 10:46:31.441152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:10.222 [2024-10-30 10:46:31.441172] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.222 [2024-10-30 10:46:31.444117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.222 [2024-10-30 10:46:31.444341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:10.222 BaseBdev1 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 BaseBdev2_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 [2024-10-30 10:46:31.496084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:10.222 [2024-10-30 10:46:31.496219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.222 [2024-10-30 10:46:31.496247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:10.222 [2024-10-30 10:46:31.496282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.222 [2024-10-30 10:46:31.498923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.222 [2024-10-30 10:46:31.498967] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:10.222 BaseBdev2 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 BaseBdev3_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 [2024-10-30 10:46:31.559979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:10.222 [2024-10-30 10:46:31.560122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.222 [2024-10-30 10:46:31.560162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:10.222 [2024-10-30 10:46:31.560181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.222 [2024-10-30 10:46:31.563119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.222 [2024-10-30 10:46:31.563216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:10.222 BaseBdev3 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 
10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 BaseBdev4_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 [2024-10-30 10:46:31.611764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:10.222 [2024-10-30 10:46:31.611858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.222 [2024-10-30 10:46:31.611885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:10.222 [2024-10-30 10:46:31.611902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.222 [2024-10-30 10:46:31.614819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.222 [2024-10-30 10:46:31.614884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:10.222 BaseBdev4 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 spare_malloc 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 spare_delay 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 [2024-10-30 10:46:31.670564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.222 [2024-10-30 10:46:31.670653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.222 [2024-10-30 10:46:31.670680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:10.222 [2024-10-30 10:46:31.670697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.222 [2024-10-30 10:46:31.673595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.222 [2024-10-30 10:46:31.673657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.222 spare 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.222 [2024-10-30 10:46:31.678603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.222 [2024-10-30 10:46:31.681269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.222 [2024-10-30 10:46:31.681374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.222 [2024-10-30 10:46:31.681449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:10.222 [2024-10-30 10:46:31.681570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:10.222 [2024-10-30 10:46:31.681589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:10.222 [2024-10-30 10:46:31.681891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:10.222 [2024-10-30 10:46:31.682316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:10.222 [2024-10-30 10:46:31.682375] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:10.222 [2024-10-30 10:46:31.682810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.222 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.223 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.512 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.512 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.512 "name": "raid_bdev1", 00:18:10.512 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:10.512 "strip_size_kb": 0, 00:18:10.512 "state": "online", 00:18:10.512 "raid_level": "raid1", 00:18:10.512 "superblock": false, 00:18:10.512 "num_base_bdevs": 4, 00:18:10.512 "num_base_bdevs_discovered": 4, 00:18:10.512 "num_base_bdevs_operational": 4, 00:18:10.512 "base_bdevs_list": [ 00:18:10.512 { 00:18:10.512 "name": "BaseBdev1", 00:18:10.512 "uuid": "984fd1b5-8b37-5429-a53b-f79729548a02", 00:18:10.512 "is_configured": true, 00:18:10.512 "data_offset": 0, 00:18:10.512 "data_size": 65536 00:18:10.512 }, 00:18:10.512 { 00:18:10.512 
"name": "BaseBdev2", 00:18:10.512 "uuid": "87e85659-0ddb-5488-bd5c-8dfb930eb7c2", 00:18:10.512 "is_configured": true, 00:18:10.512 "data_offset": 0, 00:18:10.512 "data_size": 65536 00:18:10.512 }, 00:18:10.512 { 00:18:10.512 "name": "BaseBdev3", 00:18:10.512 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:10.512 "is_configured": true, 00:18:10.512 "data_offset": 0, 00:18:10.512 "data_size": 65536 00:18:10.512 }, 00:18:10.512 { 00:18:10.512 "name": "BaseBdev4", 00:18:10.512 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:10.512 "is_configured": true, 00:18:10.512 "data_offset": 0, 00:18:10.512 "data_size": 65536 00:18:10.512 } 00:18:10.512 ] 00:18:10.512 }' 00:18:10.512 10:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.512 10:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.784 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.784 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:10.784 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.784 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.784 [2024-10-30 10:46:32.227441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.784 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:11.043 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:11.302 [2024-10-30 10:46:32.627189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:11.302 /dev/nbd0 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:11.302 
10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:11.302 1+0 records in 00:18:11.302 1+0 records out 00:18:11.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418262 s, 9.8 MB/s 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:11.302 10:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:19.422 65536+0 records in 00:18:19.422 65536+0 records out 00:18:19.422 33554432 bytes (34 MB, 32 MiB) copied, 8.08821 s, 4.1 MB/s 00:18:19.422 10:46:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:19.422 10:46:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.422 10:46:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:19.422 10:46:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.422 10:46:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:19.422 10:46:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.422 10:46:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.682 [2024-10-30 10:46:41.085541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:19.682 10:46:41 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.682 [2024-10-30 10:46:41.101643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.682 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.941 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.941 "name": "raid_bdev1", 00:18:19.941 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:19.941 "strip_size_kb": 0, 00:18:19.941 "state": "online", 00:18:19.941 "raid_level": "raid1", 00:18:19.941 "superblock": false, 00:18:19.941 "num_base_bdevs": 4, 00:18:19.941 "num_base_bdevs_discovered": 3, 00:18:19.941 "num_base_bdevs_operational": 3, 00:18:19.941 "base_bdevs_list": [ 00:18:19.941 { 00:18:19.941 "name": null, 00:18:19.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.941 "is_configured": false, 00:18:19.941 "data_offset": 0, 00:18:19.941 "data_size": 65536 00:18:19.941 }, 00:18:19.941 { 00:18:19.941 "name": "BaseBdev2", 00:18:19.941 "uuid": "87e85659-0ddb-5488-bd5c-8dfb930eb7c2", 00:18:19.941 "is_configured": true, 00:18:19.941 "data_offset": 0, 00:18:19.941 "data_size": 65536 00:18:19.941 }, 00:18:19.941 { 00:18:19.941 "name": "BaseBdev3", 00:18:19.941 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:19.941 "is_configured": true, 00:18:19.941 "data_offset": 0, 00:18:19.941 "data_size": 65536 00:18:19.941 }, 00:18:19.941 { 00:18:19.941 "name": "BaseBdev4", 00:18:19.941 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:19.941 "is_configured": true, 00:18:19.941 "data_offset": 0, 00:18:19.941 "data_size": 65536 00:18:19.941 } 00:18:19.941 ] 00:18:19.941 }' 00:18:19.941 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.941 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.200 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.200 10:46:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.200 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.200 [2024-10-30 10:46:41.613794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.200 [2024-10-30 10:46:41.627797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:18:20.200 10:46:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.200 10:46:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:20.200 [2024-10-30 10:46:41.630575] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.576 "name": "raid_bdev1", 00:18:21.576 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 
00:18:21.576 "strip_size_kb": 0, 00:18:21.576 "state": "online", 00:18:21.576 "raid_level": "raid1", 00:18:21.576 "superblock": false, 00:18:21.576 "num_base_bdevs": 4, 00:18:21.576 "num_base_bdevs_discovered": 4, 00:18:21.576 "num_base_bdevs_operational": 4, 00:18:21.576 "process": { 00:18:21.576 "type": "rebuild", 00:18:21.576 "target": "spare", 00:18:21.576 "progress": { 00:18:21.576 "blocks": 20480, 00:18:21.576 "percent": 31 00:18:21.576 } 00:18:21.576 }, 00:18:21.576 "base_bdevs_list": [ 00:18:21.576 { 00:18:21.576 "name": "spare", 00:18:21.576 "uuid": "8431756c-5fdd-5908-b12f-37f61f27c3d6", 00:18:21.576 "is_configured": true, 00:18:21.576 "data_offset": 0, 00:18:21.576 "data_size": 65536 00:18:21.576 }, 00:18:21.576 { 00:18:21.576 "name": "BaseBdev2", 00:18:21.576 "uuid": "87e85659-0ddb-5488-bd5c-8dfb930eb7c2", 00:18:21.576 "is_configured": true, 00:18:21.576 "data_offset": 0, 00:18:21.576 "data_size": 65536 00:18:21.576 }, 00:18:21.576 { 00:18:21.576 "name": "BaseBdev3", 00:18:21.576 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:21.576 "is_configured": true, 00:18:21.576 "data_offset": 0, 00:18:21.576 "data_size": 65536 00:18:21.576 }, 00:18:21.576 { 00:18:21.576 "name": "BaseBdev4", 00:18:21.576 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:21.576 "is_configured": true, 00:18:21.576 "data_offset": 0, 00:18:21.576 "data_size": 65536 00:18:21.576 } 00:18:21.576 ] 00:18:21.576 }' 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.576 [2024-10-30 10:46:42.804299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.576 [2024-10-30 10:46:42.839473] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:21.576 [2024-10-30 10:46:42.839595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.576 [2024-10-30 10:46:42.839627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:21.576 [2024-10-30 10:46:42.839641] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.576 "name": "raid_bdev1", 00:18:21.576 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:21.576 "strip_size_kb": 0, 00:18:21.576 "state": "online", 00:18:21.576 "raid_level": "raid1", 00:18:21.576 "superblock": false, 00:18:21.576 "num_base_bdevs": 4, 00:18:21.576 "num_base_bdevs_discovered": 3, 00:18:21.576 "num_base_bdevs_operational": 3, 00:18:21.576 "base_bdevs_list": [ 00:18:21.576 { 00:18:21.576 "name": null, 00:18:21.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.576 "is_configured": false, 00:18:21.576 "data_offset": 0, 00:18:21.576 "data_size": 65536 00:18:21.576 }, 00:18:21.576 { 00:18:21.576 "name": "BaseBdev2", 00:18:21.576 "uuid": "87e85659-0ddb-5488-bd5c-8dfb930eb7c2", 00:18:21.576 "is_configured": true, 00:18:21.576 "data_offset": 0, 00:18:21.576 "data_size": 65536 00:18:21.576 }, 00:18:21.576 { 00:18:21.576 "name": "BaseBdev3", 00:18:21.576 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:21.576 "is_configured": true, 00:18:21.576 "data_offset": 0, 00:18:21.576 "data_size": 65536 00:18:21.576 }, 00:18:21.576 { 00:18:21.576 "name": "BaseBdev4", 00:18:21.576 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:21.576 "is_configured": true, 00:18:21.576 "data_offset": 0, 00:18:21.576 "data_size": 65536 00:18:21.576 } 00:18:21.576 ] 00:18:21.576 }' 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.576 10:46:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.144 "name": "raid_bdev1", 00:18:22.144 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:22.144 "strip_size_kb": 0, 00:18:22.144 "state": "online", 00:18:22.144 "raid_level": "raid1", 00:18:22.144 "superblock": false, 00:18:22.144 "num_base_bdevs": 4, 00:18:22.144 "num_base_bdevs_discovered": 3, 00:18:22.144 "num_base_bdevs_operational": 3, 00:18:22.144 "base_bdevs_list": [ 00:18:22.144 { 00:18:22.144 "name": null, 00:18:22.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.144 "is_configured": false, 00:18:22.144 "data_offset": 0, 00:18:22.144 "data_size": 65536 00:18:22.144 }, 00:18:22.144 { 00:18:22.144 "name": "BaseBdev2", 00:18:22.144 "uuid": 
"87e85659-0ddb-5488-bd5c-8dfb930eb7c2", 00:18:22.144 "is_configured": true, 00:18:22.144 "data_offset": 0, 00:18:22.144 "data_size": 65536 00:18:22.144 }, 00:18:22.144 { 00:18:22.144 "name": "BaseBdev3", 00:18:22.144 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:22.144 "is_configured": true, 00:18:22.144 "data_offset": 0, 00:18:22.144 "data_size": 65536 00:18:22.144 }, 00:18:22.144 { 00:18:22.144 "name": "BaseBdev4", 00:18:22.144 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:22.144 "is_configured": true, 00:18:22.144 "data_offset": 0, 00:18:22.144 "data_size": 65536 00:18:22.144 } 00:18:22.144 ] 00:18:22.144 }' 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.144 [2024-10-30 10:46:43.542873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.144 [2024-10-30 10:46:43.556588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.144 10:46:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:22.144 [2024-10-30 10:46:43.559462] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.520 10:46:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.520 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.520 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.520 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.520 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.521 "name": "raid_bdev1", 00:18:23.521 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:23.521 "strip_size_kb": 0, 00:18:23.521 "state": "online", 00:18:23.521 "raid_level": "raid1", 00:18:23.521 "superblock": false, 00:18:23.521 "num_base_bdevs": 4, 00:18:23.521 "num_base_bdevs_discovered": 4, 00:18:23.521 "num_base_bdevs_operational": 4, 00:18:23.521 "process": { 00:18:23.521 "type": "rebuild", 00:18:23.521 "target": "spare", 00:18:23.521 "progress": { 00:18:23.521 "blocks": 20480, 00:18:23.521 "percent": 31 00:18:23.521 } 00:18:23.521 }, 00:18:23.521 "base_bdevs_list": [ 00:18:23.521 { 00:18:23.521 "name": "spare", 00:18:23.521 "uuid": "8431756c-5fdd-5908-b12f-37f61f27c3d6", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 00:18:23.521 { 
00:18:23.521 "name": "BaseBdev2", 00:18:23.521 "uuid": "87e85659-0ddb-5488-bd5c-8dfb930eb7c2", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 00:18:23.521 { 00:18:23.521 "name": "BaseBdev3", 00:18:23.521 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 00:18:23.521 { 00:18:23.521 "name": "BaseBdev4", 00:18:23.521 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 } 00:18:23.521 ] 00:18:23.521 }' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.521 [2024-10-30 10:46:44.748896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:23.521 
[2024-10-30 10:46:44.767964] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.521 "name": "raid_bdev1", 00:18:23.521 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:23.521 "strip_size_kb": 0, 00:18:23.521 "state": "online", 00:18:23.521 "raid_level": "raid1", 00:18:23.521 "superblock": false, 00:18:23.521 "num_base_bdevs": 4, 00:18:23.521 "num_base_bdevs_discovered": 3, 00:18:23.521 "num_base_bdevs_operational": 3, 00:18:23.521 "process": { 
00:18:23.521 "type": "rebuild", 00:18:23.521 "target": "spare", 00:18:23.521 "progress": { 00:18:23.521 "blocks": 24576, 00:18:23.521 "percent": 37 00:18:23.521 } 00:18:23.521 }, 00:18:23.521 "base_bdevs_list": [ 00:18:23.521 { 00:18:23.521 "name": "spare", 00:18:23.521 "uuid": "8431756c-5fdd-5908-b12f-37f61f27c3d6", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 00:18:23.521 { 00:18:23.521 "name": null, 00:18:23.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.521 "is_configured": false, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 00:18:23.521 { 00:18:23.521 "name": "BaseBdev3", 00:18:23.521 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 00:18:23.521 { 00:18:23.521 "name": "BaseBdev4", 00:18:23.521 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 } 00:18:23.521 ] 00:18:23.521 }' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=478 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.521 "name": "raid_bdev1", 00:18:23.521 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:23.521 "strip_size_kb": 0, 00:18:23.521 "state": "online", 00:18:23.521 "raid_level": "raid1", 00:18:23.521 "superblock": false, 00:18:23.521 "num_base_bdevs": 4, 00:18:23.521 "num_base_bdevs_discovered": 3, 00:18:23.521 "num_base_bdevs_operational": 3, 00:18:23.521 "process": { 00:18:23.521 "type": "rebuild", 00:18:23.521 "target": "spare", 00:18:23.521 "progress": { 00:18:23.521 "blocks": 26624, 00:18:23.521 "percent": 40 00:18:23.521 } 00:18:23.521 }, 00:18:23.521 "base_bdevs_list": [ 00:18:23.521 { 00:18:23.521 "name": "spare", 00:18:23.521 "uuid": "8431756c-5fdd-5908-b12f-37f61f27c3d6", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 00:18:23.521 { 00:18:23.521 "name": null, 00:18:23.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.521 "is_configured": false, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 
00:18:23.521 { 00:18:23.521 "name": "BaseBdev3", 00:18:23.521 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 }, 00:18:23.521 { 00:18:23.521 "name": "BaseBdev4", 00:18:23.521 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:23.521 "is_configured": true, 00:18:23.521 "data_offset": 0, 00:18:23.521 "data_size": 65536 00:18:23.521 } 00:18:23.521 ] 00:18:23.521 }' 00:18:23.521 10:46:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.779 10:46:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.779 10:46:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.779 10:46:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.779 10:46:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.714 10:46:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.714 "name": "raid_bdev1", 00:18:24.714 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:24.714 "strip_size_kb": 0, 00:18:24.714 "state": "online", 00:18:24.714 "raid_level": "raid1", 00:18:24.714 "superblock": false, 00:18:24.714 "num_base_bdevs": 4, 00:18:24.714 "num_base_bdevs_discovered": 3, 00:18:24.714 "num_base_bdevs_operational": 3, 00:18:24.714 "process": { 00:18:24.714 "type": "rebuild", 00:18:24.714 "target": "spare", 00:18:24.714 "progress": { 00:18:24.714 "blocks": 51200, 00:18:24.714 "percent": 78 00:18:24.714 } 00:18:24.714 }, 00:18:24.714 "base_bdevs_list": [ 00:18:24.714 { 00:18:24.714 "name": "spare", 00:18:24.714 "uuid": "8431756c-5fdd-5908-b12f-37f61f27c3d6", 00:18:24.714 "is_configured": true, 00:18:24.714 "data_offset": 0, 00:18:24.714 "data_size": 65536 00:18:24.714 }, 00:18:24.714 { 00:18:24.714 "name": null, 00:18:24.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.714 "is_configured": false, 00:18:24.714 "data_offset": 0, 00:18:24.714 "data_size": 65536 00:18:24.714 }, 00:18:24.714 { 00:18:24.714 "name": "BaseBdev3", 00:18:24.714 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:24.714 "is_configured": true, 00:18:24.714 "data_offset": 0, 00:18:24.714 "data_size": 65536 00:18:24.714 }, 00:18:24.714 { 00:18:24.714 "name": "BaseBdev4", 00:18:24.714 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:24.714 "is_configured": true, 00:18:24.714 "data_offset": 0, 00:18:24.714 "data_size": 65536 00:18:24.714 } 00:18:24.714 ] 00:18:24.714 }' 00:18:24.714 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.974 10:46:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.974 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.974 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.974 10:46:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.561 [2024-10-30 10:46:46.783425] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:25.561 [2024-10-30 10:46:46.783533] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:25.561 [2024-10-30 10:46:46.783613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.833 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.091 10:46:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.091 "name": "raid_bdev1", 00:18:26.091 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:26.092 "strip_size_kb": 0, 00:18:26.092 "state": "online", 00:18:26.092 "raid_level": "raid1", 00:18:26.092 "superblock": false, 00:18:26.092 "num_base_bdevs": 4, 00:18:26.092 "num_base_bdevs_discovered": 3, 00:18:26.092 "num_base_bdevs_operational": 3, 00:18:26.092 "base_bdevs_list": [ 00:18:26.092 { 00:18:26.092 "name": "spare", 00:18:26.092 "uuid": "8431756c-5fdd-5908-b12f-37f61f27c3d6", 00:18:26.092 "is_configured": true, 00:18:26.092 "data_offset": 0, 00:18:26.092 "data_size": 65536 00:18:26.092 }, 00:18:26.092 { 00:18:26.092 "name": null, 00:18:26.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.092 "is_configured": false, 00:18:26.092 "data_offset": 0, 00:18:26.092 "data_size": 65536 00:18:26.092 }, 00:18:26.092 { 00:18:26.092 "name": "BaseBdev3", 00:18:26.092 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:26.092 "is_configured": true, 00:18:26.092 "data_offset": 0, 00:18:26.092 "data_size": 65536 00:18:26.092 }, 00:18:26.092 { 00:18:26.092 "name": "BaseBdev4", 00:18:26.092 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:26.092 "is_configured": true, 00:18:26.092 "data_offset": 0, 00:18:26.092 "data_size": 65536 00:18:26.092 } 00:18:26.092 ] 00:18:26.092 }' 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.092 "name": "raid_bdev1", 00:18:26.092 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:26.092 "strip_size_kb": 0, 00:18:26.092 "state": "online", 00:18:26.092 "raid_level": "raid1", 00:18:26.092 "superblock": false, 00:18:26.092 "num_base_bdevs": 4, 00:18:26.092 "num_base_bdevs_discovered": 3, 00:18:26.092 "num_base_bdevs_operational": 3, 00:18:26.092 "base_bdevs_list": [ 00:18:26.092 { 00:18:26.092 "name": "spare", 00:18:26.092 "uuid": "8431756c-5fdd-5908-b12f-37f61f27c3d6", 00:18:26.092 "is_configured": true, 00:18:26.092 "data_offset": 0, 00:18:26.092 "data_size": 65536 00:18:26.092 }, 00:18:26.092 { 00:18:26.092 "name": null, 00:18:26.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.092 "is_configured": false, 00:18:26.092 "data_offset": 0, 00:18:26.092 "data_size": 65536 00:18:26.092 }, 00:18:26.092 { 00:18:26.092 "name": "BaseBdev3", 00:18:26.092 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 
00:18:26.092 "is_configured": true, 00:18:26.092 "data_offset": 0, 00:18:26.092 "data_size": 65536 00:18:26.092 }, 00:18:26.092 { 00:18:26.092 "name": "BaseBdev4", 00:18:26.092 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:26.092 "is_configured": true, 00:18:26.092 "data_offset": 0, 00:18:26.092 "data_size": 65536 00:18:26.092 } 00:18:26.092 ] 00:18:26.092 }' 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.092 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.351 "name": "raid_bdev1", 00:18:26.351 "uuid": "c26efd45-91f2-4ec3-810a-c387d1c7a437", 00:18:26.351 "strip_size_kb": 0, 00:18:26.351 "state": "online", 00:18:26.351 "raid_level": "raid1", 00:18:26.351 "superblock": false, 00:18:26.351 "num_base_bdevs": 4, 00:18:26.351 "num_base_bdevs_discovered": 3, 00:18:26.351 "num_base_bdevs_operational": 3, 00:18:26.351 "base_bdevs_list": [ 00:18:26.351 { 00:18:26.351 "name": "spare", 00:18:26.351 "uuid": "8431756c-5fdd-5908-b12f-37f61f27c3d6", 00:18:26.351 "is_configured": true, 00:18:26.351 "data_offset": 0, 00:18:26.351 "data_size": 65536 00:18:26.351 }, 00:18:26.351 { 00:18:26.351 "name": null, 00:18:26.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.351 "is_configured": false, 00:18:26.351 "data_offset": 0, 00:18:26.351 "data_size": 65536 00:18:26.351 }, 00:18:26.351 { 00:18:26.351 "name": "BaseBdev3", 00:18:26.351 "uuid": "b20445e3-e92f-58b8-a842-8b3f22b39b21", 00:18:26.351 "is_configured": true, 00:18:26.351 "data_offset": 0, 00:18:26.351 "data_size": 65536 00:18:26.351 }, 00:18:26.351 { 00:18:26.351 "name": "BaseBdev4", 00:18:26.351 "uuid": "5a6e16b8-16c6-5fc1-85e1-c850058046bb", 00:18:26.351 "is_configured": true, 00:18:26.351 "data_offset": 0, 00:18:26.351 "data_size": 65536 00:18:26.351 } 00:18:26.351 ] 00:18:26.351 }' 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.351 10:46:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:18:26.610 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:26.610 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.610 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.610 [2024-10-30 10:46:48.067952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.610 [2024-10-30 10:46:48.068003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.610 [2024-10-30 10:46:48.068102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.610 [2024-10-30 10:46:48.068212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.610 [2024-10-30 10:46:48.068229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:26.610 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.610 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:26.610 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.610 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.610 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:26.869 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:27.127 /dev/nbd0 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:27.127 
10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.127 1+0 records in 00:18:27.127 1+0 records out 00:18:27.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510839 s, 8.0 MB/s 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:27.127 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:27.386 /dev/nbd1 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:27.386 10:46:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.386 1+0 records in 00:18:27.386 1+0 records out 00:18:27.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313248 s, 13.1 MB/s 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:27.386 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:27.645 10:46:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:27.645 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.645 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:27.645 10:46:48 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.645 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:27.645 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.645 10:46:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.904 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.162 10:46:49 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77941 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77941 ']' 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77941 00:18:28.162 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:18:28.420 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.420 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77941 00:18:28.420 killing process with pid 77941 00:18:28.420 Received shutdown signal, test time was about 60.000000 seconds 00:18:28.420 00:18:28.420 Latency(us) 00:18:28.420 [2024-10-30T10:46:49.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.420 [2024-10-30T10:46:49.890Z] =================================================================================================================== 00:18:28.420 [2024-10-30T10:46:49.890Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.420 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:28.420 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:28.420 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77941' 00:18:28.420 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77941 00:18:28.420 [2024-10-30 
10:46:49.659027] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.420 10:46:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77941 00:18:28.678 [2024-10-30 10:46:50.072766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.626 10:46:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:29.626 00:18:29.626 real 0m20.761s 00:18:29.626 user 0m23.897s 00:18:29.626 sys 0m3.487s 00:18:29.626 10:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:29.626 ************************************ 00:18:29.626 END TEST raid_rebuild_test 00:18:29.626 ************************************ 00:18:29.626 10:46:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.918 10:46:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:18:29.918 10:46:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:29.918 10:46:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:29.918 10:46:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.918 ************************************ 00:18:29.918 START TEST raid_rebuild_test_sb 00:18:29.918 ************************************ 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:29.918 10:46:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78422 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78422 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78422 ']' 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:29.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:29.918 10:46:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.918 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:29.918 Zero copy mechanism will not be used. 
00:18:29.918 [2024-10-30 10:46:51.212376] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:18:29.918 [2024-10-30 10:46:51.212539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78422 ] 00:18:29.918 [2024-10-30 10:46:51.386783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.177 [2024-10-30 10:46:51.522782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.436 [2024-10-30 10:46:51.719993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.436 [2024-10-30 10:46:51.720087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.002 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.002 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:31.002 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.002 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.003 BaseBdev1_malloc 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:31.003 [2024-10-30 10:46:52.322662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:31.003 [2024-10-30 10:46:52.322784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.003 [2024-10-30 10:46:52.322823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:31.003 [2024-10-30 10:46:52.322843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.003 [2024-10-30 10:46:52.325723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.003 [2024-10-30 10:46:52.325788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:31.003 BaseBdev1 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.003 BaseBdev2_malloc 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.003 [2024-10-30 10:46:52.373277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:31.003 [2024-10-30 10:46:52.373398] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.003 [2024-10-30 10:46:52.373426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:31.003 [2024-10-30 10:46:52.373445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.003 [2024-10-30 10:46:52.376320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.003 [2024-10-30 10:46:52.376386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:31.003 BaseBdev2 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.003 BaseBdev3_malloc 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.003 [2024-10-30 10:46:52.435236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:31.003 [2024-10-30 10:46:52.435324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.003 [2024-10-30 10:46:52.435356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 
00:18:31.003 [2024-10-30 10:46:52.435376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.003 [2024-10-30 10:46:52.437940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.003 [2024-10-30 10:46:52.438046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:31.003 BaseBdev3 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.003 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.262 BaseBdev4_malloc 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.262 [2024-10-30 10:46:52.486406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:31.262 [2024-10-30 10:46:52.486491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.262 [2024-10-30 10:46:52.486518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:31.262 [2024-10-30 10:46:52.486536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.262 [2024-10-30 10:46:52.489344] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.262 [2024-10-30 10:46:52.489445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:31.262 BaseBdev4 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.262 spare_malloc 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.262 spare_delay 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.262 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.262 [2024-10-30 10:46:52.546539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.262 [2024-10-30 10:46:52.546645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.262 [2024-10-30 10:46:52.546674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:18:31.263 [2024-10-30 10:46:52.546692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.263 [2024-10-30 10:46:52.549686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.263 [2024-10-30 10:46:52.549752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.263 spare 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.263 [2024-10-30 10:46:52.554666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.263 [2024-10-30 10:46:52.557346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.263 [2024-10-30 10:46:52.557492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.263 [2024-10-30 10:46:52.557569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:31.263 [2024-10-30 10:46:52.557837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:31.263 [2024-10-30 10:46:52.557872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:31.263 [2024-10-30 10:46:52.558222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:31.263 [2024-10-30 10:46:52.558467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:31.263 [2024-10-30 10:46:52.558491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000007780 00:18:31.263 [2024-10-30 10:46:52.558758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:31.263 "name": "raid_bdev1", 00:18:31.263 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:31.263 "strip_size_kb": 0, 00:18:31.263 "state": "online", 00:18:31.263 "raid_level": "raid1", 00:18:31.263 "superblock": true, 00:18:31.263 "num_base_bdevs": 4, 00:18:31.263 "num_base_bdevs_discovered": 4, 00:18:31.263 "num_base_bdevs_operational": 4, 00:18:31.263 "base_bdevs_list": [ 00:18:31.263 { 00:18:31.263 "name": "BaseBdev1", 00:18:31.263 "uuid": "a312f4f8-fda9-5eb6-b387-5fa1af8eedd7", 00:18:31.263 "is_configured": true, 00:18:31.263 "data_offset": 2048, 00:18:31.263 "data_size": 63488 00:18:31.263 }, 00:18:31.263 { 00:18:31.263 "name": "BaseBdev2", 00:18:31.263 "uuid": "eb4fe31b-9b2d-585c-b361-8f49a888332c", 00:18:31.263 "is_configured": true, 00:18:31.263 "data_offset": 2048, 00:18:31.263 "data_size": 63488 00:18:31.263 }, 00:18:31.263 { 00:18:31.263 "name": "BaseBdev3", 00:18:31.263 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:31.263 "is_configured": true, 00:18:31.263 "data_offset": 2048, 00:18:31.263 "data_size": 63488 00:18:31.263 }, 00:18:31.263 { 00:18:31.263 "name": "BaseBdev4", 00:18:31.263 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:31.263 "is_configured": true, 00:18:31.263 "data_offset": 2048, 00:18:31.263 "data_size": 63488 00:18:31.263 } 00:18:31.263 ] 00:18:31.263 }' 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.263 10:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.834 [2024-10-30 
10:46:53.047313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:31.834 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:32.094 [2024-10-30 10:46:53.386992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:32.094 /dev/nbd0 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.094 1+0 records in 00:18:32.094 1+0 records out 00:18:32.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354967 s, 11.5 MB/s 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:32.094 10:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:40.211 63488+0 records in 00:18:40.211 63488+0 records out 00:18:40.211 32505856 bytes (33 MB, 31 MiB) copied, 8.17155 s, 4.0 MB/s 00:18:40.211 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:40.211 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.211 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.211 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.211 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:40.211 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.211 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:40.470 [2024-10-30 10:47:01.904689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.470 [2024-10-30 10:47:01.932761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.470 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.729 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.729 10:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.729 10:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.729 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.729 10:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.729 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.729 "name": "raid_bdev1", 00:18:40.729 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:40.729 "strip_size_kb": 0, 00:18:40.729 "state": "online", 00:18:40.729 "raid_level": "raid1", 00:18:40.729 "superblock": true, 00:18:40.729 "num_base_bdevs": 4, 00:18:40.729 "num_base_bdevs_discovered": 3, 00:18:40.729 "num_base_bdevs_operational": 3, 00:18:40.729 "base_bdevs_list": [ 00:18:40.729 { 00:18:40.729 "name": null, 00:18:40.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.729 "is_configured": false, 00:18:40.729 "data_offset": 0, 00:18:40.729 "data_size": 63488 00:18:40.729 }, 00:18:40.729 { 00:18:40.729 "name": "BaseBdev2", 00:18:40.729 "uuid": 
"eb4fe31b-9b2d-585c-b361-8f49a888332c", 00:18:40.729 "is_configured": true, 00:18:40.729 "data_offset": 2048, 00:18:40.729 "data_size": 63488 00:18:40.729 }, 00:18:40.729 { 00:18:40.729 "name": "BaseBdev3", 00:18:40.729 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:40.729 "is_configured": true, 00:18:40.729 "data_offset": 2048, 00:18:40.729 "data_size": 63488 00:18:40.729 }, 00:18:40.729 { 00:18:40.730 "name": "BaseBdev4", 00:18:40.730 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:40.730 "is_configured": true, 00:18:40.730 "data_offset": 2048, 00:18:40.730 "data_size": 63488 00:18:40.730 } 00:18:40.730 ] 00:18:40.730 }' 00:18:40.730 10:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.730 10:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.997 10:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:40.997 10:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.997 10:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.997 [2024-10-30 10:47:02.432914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.997 [2024-10-30 10:47:02.447711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:18:40.997 10:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.997 10:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:40.997 [2024-10-30 10:47:02.450224] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.406 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.406 "name": "raid_bdev1", 00:18:42.406 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:42.406 "strip_size_kb": 0, 00:18:42.406 "state": "online", 00:18:42.406 "raid_level": "raid1", 00:18:42.406 "superblock": true, 00:18:42.406 "num_base_bdevs": 4, 00:18:42.406 "num_base_bdevs_discovered": 4, 00:18:42.406 "num_base_bdevs_operational": 4, 00:18:42.406 "process": { 00:18:42.406 "type": "rebuild", 00:18:42.406 "target": "spare", 00:18:42.406 "progress": { 00:18:42.406 "blocks": 20480, 00:18:42.406 "percent": 32 00:18:42.406 } 00:18:42.406 }, 00:18:42.406 "base_bdevs_list": [ 00:18:42.406 { 00:18:42.406 "name": "spare", 00:18:42.406 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:42.406 "is_configured": true, 00:18:42.406 "data_offset": 2048, 00:18:42.406 "data_size": 63488 00:18:42.406 }, 00:18:42.406 { 00:18:42.406 "name": "BaseBdev2", 00:18:42.406 "uuid": "eb4fe31b-9b2d-585c-b361-8f49a888332c", 00:18:42.406 "is_configured": true, 00:18:42.406 "data_offset": 2048, 
00:18:42.406 "data_size": 63488 00:18:42.406 }, 00:18:42.406 { 00:18:42.406 "name": "BaseBdev3", 00:18:42.406 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:42.407 "is_configured": true, 00:18:42.407 "data_offset": 2048, 00:18:42.407 "data_size": 63488 00:18:42.407 }, 00:18:42.407 { 00:18:42.407 "name": "BaseBdev4", 00:18:42.407 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:42.407 "is_configured": true, 00:18:42.407 "data_offset": 2048, 00:18:42.407 "data_size": 63488 00:18:42.407 } 00:18:42.407 ] 00:18:42.407 }' 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.407 [2024-10-30 10:47:03.615615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.407 [2024-10-30 10:47:03.658537] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.407 [2024-10-30 10:47:03.658631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.407 [2024-10-30 10:47:03.658658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.407 [2024-10-30 10:47:03.658672] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.407 "name": "raid_bdev1", 00:18:42.407 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:42.407 "strip_size_kb": 0, 00:18:42.407 "state": "online", 00:18:42.407 "raid_level": "raid1", 
00:18:42.407 "superblock": true, 00:18:42.407 "num_base_bdevs": 4, 00:18:42.407 "num_base_bdevs_discovered": 3, 00:18:42.407 "num_base_bdevs_operational": 3, 00:18:42.407 "base_bdevs_list": [ 00:18:42.407 { 00:18:42.407 "name": null, 00:18:42.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.407 "is_configured": false, 00:18:42.407 "data_offset": 0, 00:18:42.407 "data_size": 63488 00:18:42.407 }, 00:18:42.407 { 00:18:42.407 "name": "BaseBdev2", 00:18:42.407 "uuid": "eb4fe31b-9b2d-585c-b361-8f49a888332c", 00:18:42.407 "is_configured": true, 00:18:42.407 "data_offset": 2048, 00:18:42.407 "data_size": 63488 00:18:42.407 }, 00:18:42.407 { 00:18:42.407 "name": "BaseBdev3", 00:18:42.407 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:42.407 "is_configured": true, 00:18:42.407 "data_offset": 2048, 00:18:42.407 "data_size": 63488 00:18:42.407 }, 00:18:42.407 { 00:18:42.407 "name": "BaseBdev4", 00:18:42.407 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:42.407 "is_configured": true, 00:18:42.407 "data_offset": 2048, 00:18:42.407 "data_size": 63488 00:18:42.407 } 00:18:42.407 ] 00:18:42.407 }' 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.407 10:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.976 "name": "raid_bdev1", 00:18:42.976 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:42.976 "strip_size_kb": 0, 00:18:42.976 "state": "online", 00:18:42.976 "raid_level": "raid1", 00:18:42.976 "superblock": true, 00:18:42.976 "num_base_bdevs": 4, 00:18:42.976 "num_base_bdevs_discovered": 3, 00:18:42.976 "num_base_bdevs_operational": 3, 00:18:42.976 "base_bdevs_list": [ 00:18:42.976 { 00:18:42.976 "name": null, 00:18:42.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.976 "is_configured": false, 00:18:42.976 "data_offset": 0, 00:18:42.976 "data_size": 63488 00:18:42.976 }, 00:18:42.976 { 00:18:42.976 "name": "BaseBdev2", 00:18:42.976 "uuid": "eb4fe31b-9b2d-585c-b361-8f49a888332c", 00:18:42.976 "is_configured": true, 00:18:42.976 "data_offset": 2048, 00:18:42.976 "data_size": 63488 00:18:42.976 }, 00:18:42.976 { 00:18:42.976 "name": "BaseBdev3", 00:18:42.976 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:42.976 "is_configured": true, 00:18:42.976 "data_offset": 2048, 00:18:42.976 "data_size": 63488 00:18:42.976 }, 00:18:42.976 { 00:18:42.976 "name": "BaseBdev4", 00:18:42.976 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:42.976 "is_configured": true, 00:18:42.976 "data_offset": 2048, 00:18:42.976 "data_size": 63488 00:18:42.976 } 00:18:42.976 ] 00:18:42.976 }' 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.976 10:47:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.976 [2024-10-30 10:47:04.329784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.976 [2024-10-30 10:47:04.343792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.976 10:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:42.976 [2024-10-30 10:47:04.346460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.914 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.174 "name": "raid_bdev1", 00:18:44.174 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:44.174 "strip_size_kb": 0, 00:18:44.174 "state": "online", 00:18:44.174 "raid_level": "raid1", 00:18:44.174 "superblock": true, 00:18:44.174 "num_base_bdevs": 4, 00:18:44.174 "num_base_bdevs_discovered": 4, 00:18:44.174 "num_base_bdevs_operational": 4, 00:18:44.174 "process": { 00:18:44.174 "type": "rebuild", 00:18:44.174 "target": "spare", 00:18:44.174 "progress": { 00:18:44.174 "blocks": 20480, 00:18:44.174 "percent": 32 00:18:44.174 } 00:18:44.174 }, 00:18:44.174 "base_bdevs_list": [ 00:18:44.174 { 00:18:44.174 "name": "spare", 00:18:44.174 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:44.174 "is_configured": true, 00:18:44.174 "data_offset": 2048, 00:18:44.174 "data_size": 63488 00:18:44.174 }, 00:18:44.174 { 00:18:44.174 "name": "BaseBdev2", 00:18:44.174 "uuid": "eb4fe31b-9b2d-585c-b361-8f49a888332c", 00:18:44.174 "is_configured": true, 00:18:44.174 "data_offset": 2048, 00:18:44.174 "data_size": 63488 00:18:44.174 }, 00:18:44.174 { 00:18:44.174 "name": "BaseBdev3", 00:18:44.174 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:44.174 "is_configured": true, 00:18:44.174 "data_offset": 2048, 00:18:44.174 "data_size": 63488 00:18:44.174 }, 00:18:44.174 { 00:18:44.174 "name": "BaseBdev4", 00:18:44.174 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:44.174 "is_configured": true, 00:18:44.174 "data_offset": 2048, 00:18:44.174 "data_size": 63488 00:18:44.174 } 00:18:44.174 ] 00:18:44.174 }' 00:18:44.174 10:47:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:44.174 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.174 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.174 [2024-10-30 10:47:05.511729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:44.433 [2024-10-30 10:47:05.654784] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:44.434 10:47:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.434 "name": "raid_bdev1", 00:18:44.434 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:44.434 "strip_size_kb": 0, 00:18:44.434 "state": "online", 00:18:44.434 "raid_level": "raid1", 00:18:44.434 "superblock": true, 00:18:44.434 "num_base_bdevs": 4, 00:18:44.434 "num_base_bdevs_discovered": 3, 00:18:44.434 "num_base_bdevs_operational": 3, 00:18:44.434 "process": { 00:18:44.434 "type": "rebuild", 00:18:44.434 "target": "spare", 00:18:44.434 "progress": { 00:18:44.434 "blocks": 24576, 00:18:44.434 "percent": 38 00:18:44.434 } 00:18:44.434 }, 00:18:44.434 "base_bdevs_list": [ 00:18:44.434 { 00:18:44.434 "name": "spare", 00:18:44.434 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:44.434 "is_configured": true, 00:18:44.434 "data_offset": 2048, 00:18:44.434 "data_size": 63488 
00:18:44.434 }, 00:18:44.434 { 00:18:44.434 "name": null, 00:18:44.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.434 "is_configured": false, 00:18:44.434 "data_offset": 0, 00:18:44.434 "data_size": 63488 00:18:44.434 }, 00:18:44.434 { 00:18:44.434 "name": "BaseBdev3", 00:18:44.434 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:44.434 "is_configured": true, 00:18:44.434 "data_offset": 2048, 00:18:44.434 "data_size": 63488 00:18:44.434 }, 00:18:44.434 { 00:18:44.434 "name": "BaseBdev4", 00:18:44.434 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:44.434 "is_configured": true, 00:18:44.434 "data_offset": 2048, 00:18:44.434 "data_size": 63488 00:18:44.434 } 00:18:44.434 ] 00:18:44.434 }' 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=499 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.434 10:47:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.434 "name": "raid_bdev1", 00:18:44.434 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:44.434 "strip_size_kb": 0, 00:18:44.434 "state": "online", 00:18:44.434 "raid_level": "raid1", 00:18:44.434 "superblock": true, 00:18:44.434 "num_base_bdevs": 4, 00:18:44.434 "num_base_bdevs_discovered": 3, 00:18:44.434 "num_base_bdevs_operational": 3, 00:18:44.434 "process": { 00:18:44.434 "type": "rebuild", 00:18:44.434 "target": "spare", 00:18:44.434 "progress": { 00:18:44.434 "blocks": 26624, 00:18:44.434 "percent": 41 00:18:44.434 } 00:18:44.434 }, 00:18:44.434 "base_bdevs_list": [ 00:18:44.434 { 00:18:44.434 "name": "spare", 00:18:44.434 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:44.434 "is_configured": true, 00:18:44.434 "data_offset": 2048, 00:18:44.434 "data_size": 63488 00:18:44.434 }, 00:18:44.434 { 00:18:44.434 "name": null, 00:18:44.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.434 "is_configured": false, 00:18:44.434 "data_offset": 0, 00:18:44.434 "data_size": 63488 00:18:44.434 }, 00:18:44.434 { 00:18:44.434 "name": "BaseBdev3", 00:18:44.434 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:44.434 "is_configured": true, 00:18:44.434 "data_offset": 2048, 00:18:44.434 "data_size": 63488 00:18:44.434 }, 00:18:44.434 { 00:18:44.434 "name": "BaseBdev4", 00:18:44.434 "uuid": 
"476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:44.434 "is_configured": true, 00:18:44.434 "data_offset": 2048, 00:18:44.434 "data_size": 63488 00:18:44.434 } 00:18:44.434 ] 00:18:44.434 }' 00:18:44.434 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.693 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.693 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.693 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.693 10:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.631 10:47:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.631 10:47:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.631 10:47:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.631 "name": "raid_bdev1", 00:18:45.631 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:45.631 "strip_size_kb": 0, 00:18:45.631 "state": "online", 00:18:45.631 "raid_level": "raid1", 00:18:45.631 "superblock": true, 00:18:45.631 "num_base_bdevs": 4, 00:18:45.631 "num_base_bdevs_discovered": 3, 00:18:45.631 "num_base_bdevs_operational": 3, 00:18:45.631 "process": { 00:18:45.631 "type": "rebuild", 00:18:45.631 "target": "spare", 00:18:45.631 "progress": { 00:18:45.631 "blocks": 51200, 00:18:45.631 "percent": 80 00:18:45.631 } 00:18:45.631 }, 00:18:45.631 "base_bdevs_list": [ 00:18:45.631 { 00:18:45.631 "name": "spare", 00:18:45.631 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:45.631 "is_configured": true, 00:18:45.631 "data_offset": 2048, 00:18:45.631 "data_size": 63488 00:18:45.631 }, 00:18:45.631 { 00:18:45.631 "name": null, 00:18:45.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.631 "is_configured": false, 00:18:45.631 "data_offset": 0, 00:18:45.631 "data_size": 63488 00:18:45.631 }, 00:18:45.631 { 00:18:45.631 "name": "BaseBdev3", 00:18:45.631 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:45.631 "is_configured": true, 00:18:45.631 "data_offset": 2048, 00:18:45.631 "data_size": 63488 00:18:45.631 }, 00:18:45.631 { 00:18:45.631 "name": "BaseBdev4", 00:18:45.631 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:45.631 "is_configured": true, 00:18:45.631 "data_offset": 2048, 00:18:45.631 "data_size": 63488 00:18:45.631 } 00:18:45.631 ] 00:18:45.631 }' 00:18:45.631 10:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.631 10:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.631 10:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.891 10:47:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.891 10:47:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:46.150 [2024-10-30 10:47:07.568410] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:46.150 [2024-10-30 10:47:07.568529] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:46.150 [2024-10-30 10:47:07.568691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.720 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.979 "name": "raid_bdev1", 00:18:46.979 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:46.979 
"strip_size_kb": 0, 00:18:46.979 "state": "online", 00:18:46.979 "raid_level": "raid1", 00:18:46.979 "superblock": true, 00:18:46.979 "num_base_bdevs": 4, 00:18:46.979 "num_base_bdevs_discovered": 3, 00:18:46.979 "num_base_bdevs_operational": 3, 00:18:46.979 "base_bdevs_list": [ 00:18:46.979 { 00:18:46.979 "name": "spare", 00:18:46.979 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:46.979 "is_configured": true, 00:18:46.979 "data_offset": 2048, 00:18:46.979 "data_size": 63488 00:18:46.979 }, 00:18:46.979 { 00:18:46.979 "name": null, 00:18:46.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.979 "is_configured": false, 00:18:46.979 "data_offset": 0, 00:18:46.979 "data_size": 63488 00:18:46.979 }, 00:18:46.979 { 00:18:46.979 "name": "BaseBdev3", 00:18:46.979 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:46.979 "is_configured": true, 00:18:46.979 "data_offset": 2048, 00:18:46.979 "data_size": 63488 00:18:46.979 }, 00:18:46.979 { 00:18:46.979 "name": "BaseBdev4", 00:18:46.979 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:46.979 "is_configured": true, 00:18:46.979 "data_offset": 2048, 00:18:46.979 "data_size": 63488 00:18:46.979 } 00:18:46.979 ] 00:18:46.979 }' 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.979 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.980 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.980 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.980 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.980 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.980 "name": "raid_bdev1", 00:18:46.980 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:46.980 "strip_size_kb": 0, 00:18:46.980 "state": "online", 00:18:46.980 "raid_level": "raid1", 00:18:46.980 "superblock": true, 00:18:46.980 "num_base_bdevs": 4, 00:18:46.980 "num_base_bdevs_discovered": 3, 00:18:46.980 "num_base_bdevs_operational": 3, 00:18:46.980 "base_bdevs_list": [ 00:18:46.980 { 00:18:46.980 "name": "spare", 00:18:46.980 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:46.980 "is_configured": true, 00:18:46.980 "data_offset": 2048, 00:18:46.980 "data_size": 63488 00:18:46.980 }, 00:18:46.980 { 00:18:46.980 "name": null, 00:18:46.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.980 "is_configured": false, 00:18:46.980 "data_offset": 0, 00:18:46.980 "data_size": 63488 00:18:46.980 }, 00:18:46.980 { 00:18:46.980 "name": "BaseBdev3", 00:18:46.980 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:46.980 "is_configured": true, 00:18:46.980 "data_offset": 2048, 00:18:46.980 "data_size": 
63488 00:18:46.980 }, 00:18:46.980 { 00:18:46.980 "name": "BaseBdev4", 00:18:46.980 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:46.980 "is_configured": true, 00:18:46.980 "data_offset": 2048, 00:18:46.980 "data_size": 63488 00:18:46.980 } 00:18:46.980 ] 00:18:46.980 }' 00:18:46.980 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.980 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.980 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.239 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.239 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:47.239 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.239 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.239 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.239 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.239 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.239 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.240 "name": "raid_bdev1", 00:18:47.240 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:47.240 "strip_size_kb": 0, 00:18:47.240 "state": "online", 00:18:47.240 "raid_level": "raid1", 00:18:47.240 "superblock": true, 00:18:47.240 "num_base_bdevs": 4, 00:18:47.240 "num_base_bdevs_discovered": 3, 00:18:47.240 "num_base_bdevs_operational": 3, 00:18:47.240 "base_bdevs_list": [ 00:18:47.240 { 00:18:47.240 "name": "spare", 00:18:47.240 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:47.240 "is_configured": true, 00:18:47.240 "data_offset": 2048, 00:18:47.240 "data_size": 63488 00:18:47.240 }, 00:18:47.240 { 00:18:47.240 "name": null, 00:18:47.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.240 "is_configured": false, 00:18:47.240 "data_offset": 0, 00:18:47.240 "data_size": 63488 00:18:47.240 }, 00:18:47.240 { 00:18:47.240 "name": "BaseBdev3", 00:18:47.240 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:47.240 "is_configured": true, 00:18:47.240 "data_offset": 2048, 00:18:47.240 "data_size": 63488 00:18:47.240 }, 00:18:47.240 { 00:18:47.240 "name": "BaseBdev4", 00:18:47.240 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:47.240 "is_configured": true, 00:18:47.240 "data_offset": 2048, 00:18:47.240 "data_size": 63488 00:18:47.240 } 00:18:47.240 ] 00:18:47.240 }' 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.240 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:47.499 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:47.499 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.499 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.499 [2024-10-30 10:47:08.957226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:47.499 [2024-10-30 10:47:08.957286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.499 [2024-10-30 10:47:08.957455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.499 [2024-10-30 10:47:08.957598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.499 [2024-10-30 10:47:08.957616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:47.499 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.500 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.500 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.500 10:47:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:47.500 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.759 10:47:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- 
# nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:47.759 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:48.021 /dev/nbd0 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:48.021 10:47:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.021 1+0 records in 00:18:48.021 1+0 records out 00:18:48.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327238 s, 12.5 MB/s 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.021 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:48.285 /dev/nbd1 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- 
# (( i = 1 )) 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:48.285 1+0 records in 00:18:48.285 1+0 records out 00:18:48.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359141 s, 11.4 MB/s 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:48.285 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:48.544 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:48.544 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:18:48.544 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:48.544 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.544 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:48.544 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.544 10:47:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.802 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.061 [2024-10-30 10:47:10.522717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.061 [2024-10-30 10:47:10.522791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.061 [2024-10-30 10:47:10.522833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:49.061 [2024-10-30 10:47:10.522848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.061 [2024-10-30 10:47:10.525878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.061 [2024-10-30 
10:47:10.525950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.061 [2024-10-30 10:47:10.526111] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:49.061 [2024-10-30 10:47:10.526193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:49.061 [2024-10-30 10:47:10.526387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:49.061 [2024-10-30 10:47:10.526536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:49.061 spare 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.061 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.321 [2024-10-30 10:47:10.626662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:49.321 [2024-10-30 10:47:10.626691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:49.321 [2024-10-30 10:47:10.627132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:49.321 [2024-10-30 10:47:10.627404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:49.321 [2024-10-30 10:47:10.627438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:49.321 [2024-10-30 10:47:10.627677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.321 "name": "raid_bdev1", 00:18:49.321 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:49.321 "strip_size_kb": 0, 00:18:49.321 "state": "online", 00:18:49.321 "raid_level": "raid1", 00:18:49.321 "superblock": true, 00:18:49.321 "num_base_bdevs": 4, 00:18:49.321 "num_base_bdevs_discovered": 3, 00:18:49.321 
"num_base_bdevs_operational": 3, 00:18:49.321 "base_bdevs_list": [ 00:18:49.321 { 00:18:49.321 "name": "spare", 00:18:49.321 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:49.321 "is_configured": true, 00:18:49.321 "data_offset": 2048, 00:18:49.321 "data_size": 63488 00:18:49.321 }, 00:18:49.321 { 00:18:49.321 "name": null, 00:18:49.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.321 "is_configured": false, 00:18:49.321 "data_offset": 2048, 00:18:49.321 "data_size": 63488 00:18:49.321 }, 00:18:49.321 { 00:18:49.321 "name": "BaseBdev3", 00:18:49.321 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:49.321 "is_configured": true, 00:18:49.321 "data_offset": 2048, 00:18:49.321 "data_size": 63488 00:18:49.321 }, 00:18:49.321 { 00:18:49.321 "name": "BaseBdev4", 00:18:49.321 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:49.321 "is_configured": true, 00:18:49.321 "data_offset": 2048, 00:18:49.321 "data_size": 63488 00:18:49.321 } 00:18:49.321 ] 00:18:49.321 }' 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.321 10:47:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.889 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.889 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.890 10:47:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.890 "name": "raid_bdev1", 00:18:49.890 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:49.890 "strip_size_kb": 0, 00:18:49.890 "state": "online", 00:18:49.890 "raid_level": "raid1", 00:18:49.890 "superblock": true, 00:18:49.890 "num_base_bdevs": 4, 00:18:49.890 "num_base_bdevs_discovered": 3, 00:18:49.890 "num_base_bdevs_operational": 3, 00:18:49.890 "base_bdevs_list": [ 00:18:49.890 { 00:18:49.890 "name": "spare", 00:18:49.890 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:49.890 "is_configured": true, 00:18:49.890 "data_offset": 2048, 00:18:49.890 "data_size": 63488 00:18:49.890 }, 00:18:49.890 { 00:18:49.890 "name": null, 00:18:49.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.890 "is_configured": false, 00:18:49.890 "data_offset": 2048, 00:18:49.890 "data_size": 63488 00:18:49.890 }, 00:18:49.890 { 00:18:49.890 "name": "BaseBdev3", 00:18:49.890 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:49.890 "is_configured": true, 00:18:49.890 "data_offset": 2048, 00:18:49.890 "data_size": 63488 00:18:49.890 }, 00:18:49.890 { 00:18:49.890 "name": "BaseBdev4", 00:18:49.890 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:49.890 "is_configured": true, 00:18:49.890 "data_offset": 2048, 00:18:49.890 "data_size": 63488 00:18:49.890 } 00:18:49.890 ] 00:18:49.890 }' 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.890 10:47:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.890 [2024-10-30 10:47:11.323848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.890 10:47:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.890 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.150 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.150 "name": "raid_bdev1", 00:18:50.150 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:50.150 "strip_size_kb": 0, 00:18:50.150 "state": "online", 00:18:50.150 "raid_level": "raid1", 00:18:50.150 "superblock": true, 00:18:50.150 "num_base_bdevs": 4, 00:18:50.150 "num_base_bdevs_discovered": 2, 00:18:50.150 "num_base_bdevs_operational": 2, 00:18:50.150 "base_bdevs_list": [ 00:18:50.150 { 00:18:50.150 "name": null, 00:18:50.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.150 "is_configured": false, 00:18:50.150 "data_offset": 0, 00:18:50.150 "data_size": 63488 00:18:50.150 }, 00:18:50.150 { 00:18:50.150 "name": null, 00:18:50.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.150 "is_configured": false, 00:18:50.150 "data_offset": 2048, 00:18:50.150 "data_size": 63488 00:18:50.150 }, 
00:18:50.150 { 00:18:50.150 "name": "BaseBdev3", 00:18:50.150 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:50.150 "is_configured": true, 00:18:50.150 "data_offset": 2048, 00:18:50.150 "data_size": 63488 00:18:50.150 }, 00:18:50.150 { 00:18:50.150 "name": "BaseBdev4", 00:18:50.150 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:50.150 "is_configured": true, 00:18:50.150 "data_offset": 2048, 00:18:50.150 "data_size": 63488 00:18:50.150 } 00:18:50.150 ] 00:18:50.150 }' 00:18:50.150 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.150 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.408 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.408 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.408 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.408 [2024-10-30 10:47:11.828032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.408 [2024-10-30 10:47:11.828293] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:50.408 [2024-10-30 10:47:11.828340] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:50.408 [2024-10-30 10:47:11.828396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.408 [2024-10-30 10:47:11.842682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:18:50.408 10:47:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.408 10:47:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:50.408 [2024-10-30 10:47:11.845392] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.785 "name": "raid_bdev1", 00:18:51.785 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:51.785 "strip_size_kb": 0, 00:18:51.785 "state": "online", 00:18:51.785 "raid_level": "raid1", 
00:18:51.785 "superblock": true, 00:18:51.785 "num_base_bdevs": 4, 00:18:51.785 "num_base_bdevs_discovered": 3, 00:18:51.785 "num_base_bdevs_operational": 3, 00:18:51.785 "process": { 00:18:51.785 "type": "rebuild", 00:18:51.785 "target": "spare", 00:18:51.785 "progress": { 00:18:51.785 "blocks": 20480, 00:18:51.785 "percent": 32 00:18:51.785 } 00:18:51.785 }, 00:18:51.785 "base_bdevs_list": [ 00:18:51.785 { 00:18:51.785 "name": "spare", 00:18:51.785 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:51.785 "is_configured": true, 00:18:51.785 "data_offset": 2048, 00:18:51.785 "data_size": 63488 00:18:51.785 }, 00:18:51.785 { 00:18:51.785 "name": null, 00:18:51.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.785 "is_configured": false, 00:18:51.785 "data_offset": 2048, 00:18:51.785 "data_size": 63488 00:18:51.785 }, 00:18:51.785 { 00:18:51.785 "name": "BaseBdev3", 00:18:51.785 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:51.785 "is_configured": true, 00:18:51.785 "data_offset": 2048, 00:18:51.785 "data_size": 63488 00:18:51.785 }, 00:18:51.785 { 00:18:51.785 "name": "BaseBdev4", 00:18:51.785 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:51.785 "is_configured": true, 00:18:51.785 "data_offset": 2048, 00:18:51.785 "data_size": 63488 00:18:51.785 } 00:18:51.785 ] 00:18:51.785 }' 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.785 10:47:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.785 [2024-10-30 10:47:13.014993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.785 [2024-10-30 10:47:13.053774] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:51.785 [2024-10-30 10:47:13.053896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.785 [2024-10-30 10:47:13.053926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:51.785 [2024-10-30 10:47:13.053937] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.785 "name": "raid_bdev1", 00:18:51.785 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:51.785 "strip_size_kb": 0, 00:18:51.785 "state": "online", 00:18:51.785 "raid_level": "raid1", 00:18:51.785 "superblock": true, 00:18:51.785 "num_base_bdevs": 4, 00:18:51.785 "num_base_bdevs_discovered": 2, 00:18:51.785 "num_base_bdevs_operational": 2, 00:18:51.785 "base_bdevs_list": [ 00:18:51.785 { 00:18:51.785 "name": null, 00:18:51.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.785 "is_configured": false, 00:18:51.785 "data_offset": 0, 00:18:51.785 "data_size": 63488 00:18:51.785 }, 00:18:51.785 { 00:18:51.785 "name": null, 00:18:51.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.785 "is_configured": false, 00:18:51.785 "data_offset": 2048, 00:18:51.785 "data_size": 63488 00:18:51.785 }, 00:18:51.785 { 00:18:51.785 "name": "BaseBdev3", 00:18:51.785 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:51.785 "is_configured": true, 00:18:51.785 "data_offset": 2048, 00:18:51.785 "data_size": 63488 00:18:51.785 }, 00:18:51.785 { 00:18:51.785 "name": "BaseBdev4", 00:18:51.785 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:51.785 "is_configured": true, 00:18:51.785 "data_offset": 2048, 00:18:51.785 "data_size": 63488 00:18:51.785 } 00:18:51.785 ] 00:18:51.785 }' 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:51.785 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.354 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:52.354 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.354 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.354 [2024-10-30 10:47:13.629524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:52.354 [2024-10-30 10:47:13.629651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.354 [2024-10-30 10:47:13.629694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:52.354 [2024-10-30 10:47:13.629711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.354 [2024-10-30 10:47:13.630360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.354 [2024-10-30 10:47:13.630423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:52.354 [2024-10-30 10:47:13.630544] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:52.354 [2024-10-30 10:47:13.630563] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:52.354 [2024-10-30 10:47:13.630582] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:52.354 [2024-10-30 10:47:13.630637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.354 [2024-10-30 10:47:13.645040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:18:52.354 spare 00:18:52.355 10:47:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.355 10:47:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:52.355 [2024-10-30 10:47:13.647875] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.294 "name": "raid_bdev1", 00:18:53.294 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:53.294 "strip_size_kb": 0, 00:18:53.294 "state": "online", 00:18:53.294 
"raid_level": "raid1", 00:18:53.294 "superblock": true, 00:18:53.294 "num_base_bdevs": 4, 00:18:53.294 "num_base_bdevs_discovered": 3, 00:18:53.294 "num_base_bdevs_operational": 3, 00:18:53.294 "process": { 00:18:53.294 "type": "rebuild", 00:18:53.294 "target": "spare", 00:18:53.294 "progress": { 00:18:53.294 "blocks": 20480, 00:18:53.294 "percent": 32 00:18:53.294 } 00:18:53.294 }, 00:18:53.294 "base_bdevs_list": [ 00:18:53.294 { 00:18:53.294 "name": "spare", 00:18:53.294 "uuid": "954701ee-709a-5bae-bf69-dd0e12effc72", 00:18:53.294 "is_configured": true, 00:18:53.294 "data_offset": 2048, 00:18:53.294 "data_size": 63488 00:18:53.294 }, 00:18:53.294 { 00:18:53.294 "name": null, 00:18:53.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.294 "is_configured": false, 00:18:53.294 "data_offset": 2048, 00:18:53.294 "data_size": 63488 00:18:53.294 }, 00:18:53.294 { 00:18:53.294 "name": "BaseBdev3", 00:18:53.294 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:53.294 "is_configured": true, 00:18:53.294 "data_offset": 2048, 00:18:53.294 "data_size": 63488 00:18:53.294 }, 00:18:53.294 { 00:18:53.294 "name": "BaseBdev4", 00:18:53.294 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:53.294 "is_configured": true, 00:18:53.294 "data_offset": 2048, 00:18:53.294 "data_size": 63488 00:18:53.294 } 00:18:53.294 ] 00:18:53.294 }' 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.294 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.553 [2024-10-30 10:47:14.801153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.553 [2024-10-30 10:47:14.857149] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:53.553 [2024-10-30 10:47:14.857233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.553 [2024-10-30 10:47:14.857259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:53.553 [2024-10-30 10:47:14.857274] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.553 
10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.553 10:47:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.553 "name": "raid_bdev1", 00:18:53.553 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:53.553 "strip_size_kb": 0, 00:18:53.553 "state": "online", 00:18:53.553 "raid_level": "raid1", 00:18:53.553 "superblock": true, 00:18:53.553 "num_base_bdevs": 4, 00:18:53.553 "num_base_bdevs_discovered": 2, 00:18:53.553 "num_base_bdevs_operational": 2, 00:18:53.553 "base_bdevs_list": [ 00:18:53.553 { 00:18:53.553 "name": null, 00:18:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.553 "is_configured": false, 00:18:53.553 "data_offset": 0, 00:18:53.553 "data_size": 63488 00:18:53.553 }, 00:18:53.553 { 00:18:53.553 "name": null, 00:18:53.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.553 "is_configured": false, 00:18:53.553 "data_offset": 2048, 00:18:53.553 "data_size": 63488 00:18:53.553 }, 00:18:53.553 { 00:18:53.553 "name": "BaseBdev3", 00:18:53.553 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:53.553 "is_configured": true, 00:18:53.553 "data_offset": 2048, 00:18:53.553 "data_size": 63488 00:18:53.553 }, 00:18:53.553 { 00:18:53.553 "name": "BaseBdev4", 00:18:53.553 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:53.553 "is_configured": true, 00:18:53.553 "data_offset": 2048, 00:18:53.554 "data_size": 63488 00:18:53.554 } 00:18:53.554 ] 00:18:53.554 }' 00:18:53.554 10:47:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.554 10:47:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.119 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.119 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.119 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.119 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.119 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.120 "name": "raid_bdev1", 00:18:54.120 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:54.120 "strip_size_kb": 0, 00:18:54.120 "state": "online", 00:18:54.120 "raid_level": "raid1", 00:18:54.120 "superblock": true, 00:18:54.120 "num_base_bdevs": 4, 00:18:54.120 "num_base_bdevs_discovered": 2, 00:18:54.120 "num_base_bdevs_operational": 2, 00:18:54.120 "base_bdevs_list": [ 00:18:54.120 { 00:18:54.120 "name": null, 00:18:54.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.120 "is_configured": false, 00:18:54.120 "data_offset": 0, 00:18:54.120 "data_size": 63488 00:18:54.120 }, 00:18:54.120 
{ 00:18:54.120 "name": null, 00:18:54.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.120 "is_configured": false, 00:18:54.120 "data_offset": 2048, 00:18:54.120 "data_size": 63488 00:18:54.120 }, 00:18:54.120 { 00:18:54.120 "name": "BaseBdev3", 00:18:54.120 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:54.120 "is_configured": true, 00:18:54.120 "data_offset": 2048, 00:18:54.120 "data_size": 63488 00:18:54.120 }, 00:18:54.120 { 00:18:54.120 "name": "BaseBdev4", 00:18:54.120 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:54.120 "is_configured": true, 00:18:54.120 "data_offset": 2048, 00:18:54.120 "data_size": 63488 00:18:54.120 } 00:18:54.120 ] 00:18:54.120 }' 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.120 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.120 [2024-10-30 10:47:15.585162] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:54.120 [2024-10-30 10:47:15.585242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.120 [2024-10-30 10:47:15.585273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:54.120 [2024-10-30 10:47:15.585292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.120 [2024-10-30 10:47:15.585851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.120 [2024-10-30 10:47:15.585889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:54.120 [2024-10-30 10:47:15.586005] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:54.120 [2024-10-30 10:47:15.586032] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:54.120 [2024-10-30 10:47:15.586044] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:54.120 [2024-10-30 10:47:15.586072] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:54.120 BaseBdev1 00:18:54.378 10:47:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.378 10:47:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.313 10:47:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.313 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.313 "name": "raid_bdev1", 00:18:55.313 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:55.313 "strip_size_kb": 0, 00:18:55.313 "state": "online", 00:18:55.313 "raid_level": "raid1", 00:18:55.313 "superblock": true, 00:18:55.313 "num_base_bdevs": 4, 00:18:55.313 "num_base_bdevs_discovered": 2, 00:18:55.313 "num_base_bdevs_operational": 2, 00:18:55.313 "base_bdevs_list": [ 00:18:55.313 { 00:18:55.313 "name": null, 00:18:55.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.313 "is_configured": false, 00:18:55.313 "data_offset": 0, 00:18:55.313 "data_size": 63488 00:18:55.313 }, 00:18:55.313 { 00:18:55.313 "name": null, 00:18:55.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.313 
"is_configured": false, 00:18:55.313 "data_offset": 2048, 00:18:55.313 "data_size": 63488 00:18:55.313 }, 00:18:55.313 { 00:18:55.313 "name": "BaseBdev3", 00:18:55.313 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:55.313 "is_configured": true, 00:18:55.313 "data_offset": 2048, 00:18:55.313 "data_size": 63488 00:18:55.313 }, 00:18:55.313 { 00:18:55.314 "name": "BaseBdev4", 00:18:55.314 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:55.314 "is_configured": true, 00:18:55.314 "data_offset": 2048, 00:18:55.314 "data_size": 63488 00:18:55.314 } 00:18:55.314 ] 00:18:55.314 }' 00:18:55.314 10:47:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.314 10:47:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:55.882 "name": "raid_bdev1", 00:18:55.882 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:55.882 "strip_size_kb": 0, 00:18:55.882 "state": "online", 00:18:55.882 "raid_level": "raid1", 00:18:55.882 "superblock": true, 00:18:55.882 "num_base_bdevs": 4, 00:18:55.882 "num_base_bdevs_discovered": 2, 00:18:55.882 "num_base_bdevs_operational": 2, 00:18:55.882 "base_bdevs_list": [ 00:18:55.882 { 00:18:55.882 "name": null, 00:18:55.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.882 "is_configured": false, 00:18:55.882 "data_offset": 0, 00:18:55.882 "data_size": 63488 00:18:55.882 }, 00:18:55.882 { 00:18:55.882 "name": null, 00:18:55.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.882 "is_configured": false, 00:18:55.882 "data_offset": 2048, 00:18:55.882 "data_size": 63488 00:18:55.882 }, 00:18:55.882 { 00:18:55.882 "name": "BaseBdev3", 00:18:55.882 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:55.882 "is_configured": true, 00:18:55.882 "data_offset": 2048, 00:18:55.882 "data_size": 63488 00:18:55.882 }, 00:18:55.882 { 00:18:55.882 "name": "BaseBdev4", 00:18:55.882 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:55.882 "is_configured": true, 00:18:55.882 "data_offset": 2048, 00:18:55.882 "data_size": 63488 00:18:55.882 } 00:18:55.882 ] 00:18:55.882 }' 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.882 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.882 [2024-10-30 10:47:17.321814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.882 [2024-10-30 10:47:17.322116] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:55.882 [2024-10-30 10:47:17.322137] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:55.882 request: 00:18:55.882 { 00:18:55.882 "base_bdev": "BaseBdev1", 00:18:55.882 "raid_bdev": "raid_bdev1", 00:18:55.882 "method": "bdev_raid_add_base_bdev", 00:18:55.883 "req_id": 1 00:18:55.883 } 00:18:55.883 Got JSON-RPC error response 00:18:55.883 response: 00:18:55.883 { 00:18:55.883 "code": -22, 00:18:55.883 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:55.883 } 00:18:55.883 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:55.883 10:47:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:18:55.883 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:55.883 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:55.883 10:47:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:55.883 10:47:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.259 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.259 "name": "raid_bdev1", 00:18:57.259 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:57.259 "strip_size_kb": 0, 00:18:57.259 "state": "online", 00:18:57.259 "raid_level": "raid1", 00:18:57.259 "superblock": true, 00:18:57.259 "num_base_bdevs": 4, 00:18:57.259 "num_base_bdevs_discovered": 2, 00:18:57.259 "num_base_bdevs_operational": 2, 00:18:57.259 "base_bdevs_list": [ 00:18:57.259 { 00:18:57.259 "name": null, 00:18:57.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.260 "is_configured": false, 00:18:57.260 "data_offset": 0, 00:18:57.260 "data_size": 63488 00:18:57.260 }, 00:18:57.260 { 00:18:57.260 "name": null, 00:18:57.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.260 "is_configured": false, 00:18:57.260 "data_offset": 2048, 00:18:57.260 "data_size": 63488 00:18:57.260 }, 00:18:57.260 { 00:18:57.260 "name": "BaseBdev3", 00:18:57.260 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:57.260 "is_configured": true, 00:18:57.260 "data_offset": 2048, 00:18:57.260 "data_size": 63488 00:18:57.260 }, 00:18:57.260 { 00:18:57.260 "name": "BaseBdev4", 00:18:57.260 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:57.260 "is_configured": true, 00:18:57.260 "data_offset": 2048, 00:18:57.260 "data_size": 63488 00:18:57.260 } 00:18:57.260 ] 00:18:57.260 }' 00:18:57.260 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.260 10:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.525 10:47:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.525 "name": "raid_bdev1", 00:18:57.525 "uuid": "8500d310-ad6d-4183-ba38-54a261d8dae6", 00:18:57.525 "strip_size_kb": 0, 00:18:57.525 "state": "online", 00:18:57.525 "raid_level": "raid1", 00:18:57.525 "superblock": true, 00:18:57.525 "num_base_bdevs": 4, 00:18:57.525 "num_base_bdevs_discovered": 2, 00:18:57.525 "num_base_bdevs_operational": 2, 00:18:57.525 "base_bdevs_list": [ 00:18:57.525 { 00:18:57.525 "name": null, 00:18:57.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.525 "is_configured": false, 00:18:57.525 "data_offset": 0, 00:18:57.525 "data_size": 63488 00:18:57.525 }, 00:18:57.525 { 00:18:57.525 "name": null, 00:18:57.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.525 "is_configured": false, 00:18:57.525 "data_offset": 2048, 00:18:57.525 "data_size": 63488 00:18:57.525 }, 00:18:57.525 { 00:18:57.525 "name": "BaseBdev3", 00:18:57.525 "uuid": "22738c9f-66dd-5b60-aa4d-69c50c017e3e", 00:18:57.525 "is_configured": true, 00:18:57.525 "data_offset": 2048, 00:18:57.525 "data_size": 63488 00:18:57.525 }, 
00:18:57.525 { 00:18:57.525 "name": "BaseBdev4", 00:18:57.525 "uuid": "476b0516-dc73-5590-ba79-de31311cf6fc", 00:18:57.525 "is_configured": true, 00:18:57.525 "data_offset": 2048, 00:18:57.525 "data_size": 63488 00:18:57.525 } 00:18:57.525 ] 00:18:57.525 }' 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:57.525 10:47:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.792 10:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:57.792 10:47:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78422 00:18:57.792 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78422 ']' 00:18:57.792 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78422 00:18:57.792 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:57.792 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.792 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78422 00:18:57.792 killing process with pid 78422 00:18:57.792 Received shutdown signal, test time was about 60.000000 seconds 00:18:57.792 00:18:57.792 Latency(us) 00:18:57.792 [2024-10-30T10:47:19.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.792 [2024-10-30T10:47:19.263Z] =================================================================================================================== 00:18:57.793 [2024-10-30T10:47:19.263Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.793 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:18:57.793 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:57.793 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78422' 00:18:57.793 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78422 00:18:57.793 [2024-10-30 10:47:19.055368] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:57.793 10:47:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78422 00:18:57.793 [2024-10-30 10:47:19.055545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.793 [2024-10-30 10:47:19.055649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.793 [2024-10-30 10:47:19.055665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:58.053 [2024-10-30 10:47:19.484596] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:59.430 00:18:59.430 real 0m29.366s 00:18:59.430 user 0m35.585s 00:18:59.430 sys 0m4.117s 00:18:59.430 ************************************ 00:18:59.430 END TEST raid_rebuild_test_sb 00:18:59.430 ************************************ 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.430 10:47:20 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:18:59.430 10:47:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:59.430 10:47:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:59.430 10:47:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
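The `raid_rebuild_test_sb` run above finishes by checking `jq -r '.process.type // "none"'` and `'.process.target // "none"'` against the `bdev_raid_get_bdevs` dump: once the rebuild completes, the record carries no `process` object, so both expressions must fall back to `"none"`. A minimal Python mimic of that jq alternative-operator check (a sketch, not SPDK code; the sample records are modeled on the dumps in this log, and `or "none"` approximates jq's `//` default for absent/null fields):

```python
# Mimic of jq's '.process.type // "none"' used by verify_raid_bdev_process
# in bdev_raid.sh. Not SPDK code -- sample records modeled on the log above.

def process_field(raid_info, field):
    # jq's '//' substitutes the default when the path is missing or null;
    # chained .get() plus 'or' approximates that for these dumps.
    return raid_info.get("process", {}).get(field) or "none"

# After the rebuild has finished, the dump has no "process" object,
# so both checks must come back "none" for the [[ none == none ]] test.
finished = {"name": "raid_bdev1", "state": "online", "raid_level": "raid1"}
assert process_field(finished, "type") == "none"
assert process_field(finished, "target") == "none"

# While a rebuild is running, the same fields name the operation and target,
# as in the later raid_rebuild_test_io dump ("type": "rebuild", "target": "spare").
rebuilding = {"name": "raid_bdev1", "process": {"type": "rebuild", "target": "spare"}}
assert process_field(rebuilding, "type") == "rebuild"
assert process_field(rebuilding, "target") == "spare"
```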
00:18:59.430 ************************************ 00:18:59.430 START TEST raid_rebuild_test_io 00:18:59.430 ************************************ 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79221 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79221 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 79221 ']' 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.430 10:47:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:59.430 [2024-10-30 10:47:20.633146] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:18:59.430 [2024-10-30 10:47:20.633625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79221 ] 00:18:59.430 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:59.430 Zero copy mechanism will not be used. 
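The "I/O size of 3145728 is greater than zero copy threshold (65536)" notice above follows directly from the bdevperf command line: `-o 3M` requests 3 MiB I/Os, which exceeds the 64 KiB zero-copy cutoff, so bdevperf disables zero copy for the run. A quick arithmetic check (assuming `3M` means binary MiB, as the printed value confirms):

```python
# Why bdevperf reports that zero copy will not be used for this run:
# the '-o 3M' argument above asks for 3 MiB I/Os, well past the 64 KiB cutoff.
io_size = 3 * 1024 * 1024          # "-o 3M" from the bdevperf invocation
zero_copy_threshold = 65536        # 64 KiB, as stated in the log message

assert io_size == 3145728          # the exact size printed in the log
assert io_size > zero_copy_threshold  # hence "Zero copy mechanism will not be used."
```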
00:18:59.430 [2024-10-30 10:47:20.810608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.689 [2024-10-30 10:47:20.943289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.689 [2024-10-30 10:47:21.141124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.689 [2024-10-30 10:47:21.141463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.256 BaseBdev1_malloc 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.256 [2024-10-30 10:47:21.701132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:00.256 [2024-10-30 10:47:21.701346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.256 [2024-10-30 10:47:21.701421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:00.256 [2024-10-30 
10:47:21.701686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.256 [2024-10-30 10:47:21.704701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.256 BaseBdev1 00:19:00.256 [2024-10-30 10:47:21.704918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.256 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.512 BaseBdev2_malloc 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.512 [2024-10-30 10:47:21.757193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:00.512 [2024-10-30 10:47:21.757389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.512 [2024-10-30 10:47:21.757465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:00.512 [2024-10-30 10:47:21.757629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.512 [2024-10-30 10:47:21.760508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:19:00.512 [2024-10-30 10:47:21.760664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:00.512 BaseBdev2 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.512 BaseBdev3_malloc 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.512 [2024-10-30 10:47:21.822328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:00.512 [2024-10-30 10:47:21.822559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.512 [2024-10-30 10:47:21.822634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:00.512 [2024-10-30 10:47:21.822748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.512 [2024-10-30 10:47:21.825742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.512 [2024-10-30 10:47:21.825902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:00.512 BaseBdev3 00:19:00.512 10:47:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.512 BaseBdev4_malloc 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.512 [2024-10-30 10:47:21.871876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:00.512 [2024-10-30 10:47:21.871936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.512 [2024-10-30 10:47:21.871963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:00.512 [2024-10-30 10:47:21.871994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.512 [2024-10-30 10:47:21.874890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.512 [2024-10-30 10:47:21.875155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:00.512 BaseBdev4 00:19:00.512 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.513 spare_malloc 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.513 spare_delay 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.513 [2024-10-30 10:47:21.934818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.513 [2024-10-30 10:47:21.935020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.513 [2024-10-30 10:47:21.935092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:00.513 [2024-10-30 10:47:21.935117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.513 [2024-10-30 10:47:21.938231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.513 [2024-10-30 10:47:21.938279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.513 spare 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.513 [2024-10-30 10:47:21.942900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.513 [2024-10-30 10:47:21.945530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.513 [2024-10-30 10:47:21.945632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.513 [2024-10-30 10:47:21.945710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:00.513 [2024-10-30 10:47:21.945821] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:00.513 [2024-10-30 10:47:21.945840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:00.513 [2024-10-30 10:47:21.946247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:00.513 [2024-10-30 10:47:21.946468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:00.513 [2024-10-30 10:47:21.946487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:00.513 [2024-10-30 10:47:21.946672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:00.513 10:47:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.513 10:47:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.770 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.770 "name": "raid_bdev1", 00:19:00.770 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:00.770 "strip_size_kb": 0, 00:19:00.770 "state": "online", 00:19:00.770 "raid_level": "raid1", 00:19:00.770 "superblock": false, 00:19:00.770 "num_base_bdevs": 4, 00:19:00.770 "num_base_bdevs_discovered": 4, 00:19:00.770 "num_base_bdevs_operational": 4, 00:19:00.770 "base_bdevs_list": [ 00:19:00.770 
{ 00:19:00.770 "name": "BaseBdev1", 00:19:00.770 "uuid": "8a782740-d2d0-571d-9ed7-d439a8b9d7bb", 00:19:00.770 "is_configured": true, 00:19:00.770 "data_offset": 0, 00:19:00.770 "data_size": 65536 00:19:00.770 }, 00:19:00.770 { 00:19:00.770 "name": "BaseBdev2", 00:19:00.770 "uuid": "eded9db6-5542-5374-abed-46a7e69a4ecd", 00:19:00.770 "is_configured": true, 00:19:00.770 "data_offset": 0, 00:19:00.770 "data_size": 65536 00:19:00.770 }, 00:19:00.770 { 00:19:00.770 "name": "BaseBdev3", 00:19:00.770 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:00.770 "is_configured": true, 00:19:00.770 "data_offset": 0, 00:19:00.770 "data_size": 65536 00:19:00.770 }, 00:19:00.770 { 00:19:00.770 "name": "BaseBdev4", 00:19:00.770 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:00.770 "is_configured": true, 00:19:00.770 "data_offset": 0, 00:19:00.770 "data_size": 65536 00:19:00.770 } 00:19:00.770 ] 00:19:00.770 }' 00:19:00.770 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.770 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:01.337 [2024-10-30 10:47:22.515531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.337 
10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.337 [2024-10-30 10:47:22.615086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.337 "name": "raid_bdev1", 00:19:01.337 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:01.337 "strip_size_kb": 0, 00:19:01.337 "state": "online", 00:19:01.337 "raid_level": "raid1", 00:19:01.337 "superblock": false, 00:19:01.337 "num_base_bdevs": 4, 00:19:01.337 "num_base_bdevs_discovered": 3, 00:19:01.337 "num_base_bdevs_operational": 3, 00:19:01.337 "base_bdevs_list": [ 00:19:01.337 { 00:19:01.337 "name": null, 00:19:01.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.337 "is_configured": false, 00:19:01.337 "data_offset": 0, 00:19:01.337 "data_size": 65536 00:19:01.337 }, 00:19:01.337 { 00:19:01.337 "name": "BaseBdev2", 00:19:01.337 "uuid": "eded9db6-5542-5374-abed-46a7e69a4ecd", 00:19:01.337 "is_configured": true, 00:19:01.337 "data_offset": 0, 00:19:01.337 "data_size": 65536 00:19:01.337 }, 00:19:01.337 { 00:19:01.337 "name": "BaseBdev3", 00:19:01.337 "uuid": 
"135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:01.337 "is_configured": true, 00:19:01.337 "data_offset": 0, 00:19:01.337 "data_size": 65536 00:19:01.337 }, 00:19:01.337 { 00:19:01.337 "name": "BaseBdev4", 00:19:01.337 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:01.337 "is_configured": true, 00:19:01.337 "data_offset": 0, 00:19:01.337 "data_size": 65536 00:19:01.337 } 00:19:01.337 ] 00:19:01.337 }' 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.337 10:47:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.337 [2024-10-30 10:47:22.767289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:01.337 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:01.337 Zero copy mechanism will not be used. 00:19:01.337 Running I/O for 60 seconds... 00:19:01.945 10:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.945 10:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.945 10:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.945 [2024-10-30 10:47:23.142191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.945 10:47:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.945 10:47:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:01.945 [2024-10-30 10:47:23.215667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:01.945 [2024-10-30 10:47:23.218301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:01.945 [2024-10-30 10:47:23.367086] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:02.205 
[2024-10-30 10:47:23.592644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:02.205 [2024-10-30 10:47:23.593530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:02.722 171.00 IOPS, 513.00 MiB/s [2024-10-30T10:47:24.192Z] [2024-10-30 10:47:23.948729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:02.722 [2024-10-30 10:47:24.177911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:02.722 [2024-10-30 10:47:24.178295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.722 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.978 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.978 10:47:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.978 "name": "raid_bdev1", 00:19:02.978 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:02.978 "strip_size_kb": 0, 00:19:02.978 "state": "online", 00:19:02.978 "raid_level": "raid1", 00:19:02.978 "superblock": false, 00:19:02.978 "num_base_bdevs": 4, 00:19:02.978 "num_base_bdevs_discovered": 4, 00:19:02.978 "num_base_bdevs_operational": 4, 00:19:02.978 "process": { 00:19:02.978 "type": "rebuild", 00:19:02.978 "target": "spare", 00:19:02.978 "progress": { 00:19:02.979 "blocks": 10240, 00:19:02.979 "percent": 15 00:19:02.979 } 00:19:02.979 }, 00:19:02.979 "base_bdevs_list": [ 00:19:02.979 { 00:19:02.979 "name": "spare", 00:19:02.979 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:02.979 "is_configured": true, 00:19:02.979 "data_offset": 0, 00:19:02.979 "data_size": 65536 00:19:02.979 }, 00:19:02.979 { 00:19:02.979 "name": "BaseBdev2", 00:19:02.979 "uuid": "eded9db6-5542-5374-abed-46a7e69a4ecd", 00:19:02.979 "is_configured": true, 00:19:02.979 "data_offset": 0, 00:19:02.979 "data_size": 65536 00:19:02.979 }, 00:19:02.979 { 00:19:02.979 "name": "BaseBdev3", 00:19:02.979 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:02.979 "is_configured": true, 00:19:02.979 "data_offset": 0, 00:19:02.979 "data_size": 65536 00:19:02.979 }, 00:19:02.979 { 00:19:02.979 "name": "BaseBdev4", 00:19:02.979 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:02.979 "is_configured": true, 00:19:02.979 "data_offset": 0, 00:19:02.979 "data_size": 65536 00:19:02.979 } 00:19:02.979 ] 00:19:02.979 }' 00:19:02.979 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.979 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.979 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.979 10:47:24 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.979 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:02.979 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.979 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.979 [2024-10-30 10:47:24.350839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.236 [2024-10-30 10:47:24.486013] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:03.236 [2024-10-30 10:47:24.489098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.236 [2024-10-30 10:47:24.489142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.236 [2024-10-30 10:47:24.489183] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:03.236 [2024-10-30 10:47:24.512232] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.236 "name": "raid_bdev1", 00:19:03.236 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:03.236 "strip_size_kb": 0, 00:19:03.236 "state": "online", 00:19:03.236 "raid_level": "raid1", 00:19:03.236 "superblock": false, 00:19:03.236 "num_base_bdevs": 4, 00:19:03.236 "num_base_bdevs_discovered": 3, 00:19:03.236 "num_base_bdevs_operational": 3, 00:19:03.236 "base_bdevs_list": [ 00:19:03.236 { 00:19:03.236 "name": null, 00:19:03.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.236 "is_configured": false, 00:19:03.236 "data_offset": 0, 00:19:03.236 "data_size": 65536 00:19:03.236 }, 00:19:03.236 { 00:19:03.236 "name": "BaseBdev2", 00:19:03.236 "uuid": "eded9db6-5542-5374-abed-46a7e69a4ecd", 00:19:03.236 "is_configured": true, 00:19:03.236 "data_offset": 0, 00:19:03.236 "data_size": 65536 00:19:03.236 }, 00:19:03.236 { 00:19:03.236 "name": "BaseBdev3", 00:19:03.236 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:03.236 "is_configured": true, 
00:19:03.236 "data_offset": 0, 00:19:03.236 "data_size": 65536 00:19:03.236 }, 00:19:03.236 { 00:19:03.236 "name": "BaseBdev4", 00:19:03.236 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:03.236 "is_configured": true, 00:19:03.236 "data_offset": 0, 00:19:03.236 "data_size": 65536 00:19:03.236 } 00:19:03.236 ] 00:19:03.236 }' 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.236 10:47:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 131.50 IOPS, 394.50 MiB/s [2024-10-30T10:47:25.222Z] 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.752 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.752 "name": "raid_bdev1", 00:19:03.752 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:03.752 "strip_size_kb": 0, 00:19:03.752 "state": "online", 00:19:03.752 "raid_level": "raid1", 00:19:03.752 
"superblock": false, 00:19:03.752 "num_base_bdevs": 4, 00:19:03.752 "num_base_bdevs_discovered": 3, 00:19:03.752 "num_base_bdevs_operational": 3, 00:19:03.752 "base_bdevs_list": [ 00:19:03.752 { 00:19:03.752 "name": null, 00:19:03.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.752 "is_configured": false, 00:19:03.752 "data_offset": 0, 00:19:03.752 "data_size": 65536 00:19:03.752 }, 00:19:03.752 { 00:19:03.752 "name": "BaseBdev2", 00:19:03.752 "uuid": "eded9db6-5542-5374-abed-46a7e69a4ecd", 00:19:03.752 "is_configured": true, 00:19:03.752 "data_offset": 0, 00:19:03.752 "data_size": 65536 00:19:03.752 }, 00:19:03.752 { 00:19:03.752 "name": "BaseBdev3", 00:19:03.752 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:03.752 "is_configured": true, 00:19:03.752 "data_offset": 0, 00:19:03.752 "data_size": 65536 00:19:03.752 }, 00:19:03.752 { 00:19:03.752 "name": "BaseBdev4", 00:19:03.753 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:03.753 "is_configured": true, 00:19:03.753 "data_offset": 0, 00:19:03.753 "data_size": 65536 00:19:03.753 } 00:19:03.753 ] 00:19:03.753 }' 00:19:03.753 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.753 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.753 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.753 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.753 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:03.753 10:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.753 10:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.753 [2024-10-30 10:47:25.218528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:19:04.011 10:47:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.011 10:47:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:04.011 [2024-10-30 10:47:25.290739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:04.011 [2024-10-30 10:47:25.293443] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.011 [2024-10-30 10:47:25.414081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:04.011 [2024-10-30 10:47:25.414736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:04.270 [2024-10-30 10:47:25.562939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:04.528 139.00 IOPS, 417.00 MiB/s [2024-10-30T10:47:25.998Z] [2024-10-30 10:47:25.900864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:04.786 [2024-10-30 10:47:26.117003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.044 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.044 "name": "raid_bdev1", 00:19:05.045 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:05.045 "strip_size_kb": 0, 00:19:05.045 "state": "online", 00:19:05.045 "raid_level": "raid1", 00:19:05.045 "superblock": false, 00:19:05.045 "num_base_bdevs": 4, 00:19:05.045 "num_base_bdevs_discovered": 4, 00:19:05.045 "num_base_bdevs_operational": 4, 00:19:05.045 "process": { 00:19:05.045 "type": "rebuild", 00:19:05.045 "target": "spare", 00:19:05.045 "progress": { 00:19:05.045 "blocks": 10240, 00:19:05.045 "percent": 15 00:19:05.045 } 00:19:05.045 }, 00:19:05.045 "base_bdevs_list": [ 00:19:05.045 { 00:19:05.045 "name": "spare", 00:19:05.045 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:05.045 "is_configured": true, 00:19:05.045 "data_offset": 0, 00:19:05.045 "data_size": 65536 00:19:05.045 }, 00:19:05.045 { 00:19:05.045 "name": "BaseBdev2", 00:19:05.045 "uuid": "eded9db6-5542-5374-abed-46a7e69a4ecd", 00:19:05.045 "is_configured": true, 00:19:05.045 "data_offset": 0, 00:19:05.045 "data_size": 65536 00:19:05.045 }, 00:19:05.045 { 00:19:05.045 "name": "BaseBdev3", 00:19:05.045 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:05.045 "is_configured": true, 00:19:05.045 "data_offset": 0, 00:19:05.045 "data_size": 65536 00:19:05.045 }, 00:19:05.045 { 00:19:05.045 "name": "BaseBdev4", 00:19:05.045 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:05.045 "is_configured": true, 00:19:05.045 "data_offset": 0, 00:19:05.045 
"data_size": 65536 00:19:05.045 } 00:19:05.045 ] 00:19:05.045 }' 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.045 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.045 [2024-10-30 10:47:26.449182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:05.305 [2024-10-30 10:47:26.554362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:05.305 [2024-10-30 10:47:26.554891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:05.305 [2024-10-30 10:47:26.658072] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:05.305 [2024-10-30 10:47:26.658241] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:05.305 
10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.305 "name": "raid_bdev1", 00:19:05.305 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:05.305 "strip_size_kb": 0, 00:19:05.305 "state": "online", 00:19:05.305 "raid_level": "raid1", 00:19:05.305 "superblock": false, 00:19:05.305 "num_base_bdevs": 4, 00:19:05.305 "num_base_bdevs_discovered": 3, 00:19:05.305 "num_base_bdevs_operational": 3, 00:19:05.305 "process": { 00:19:05.305 "type": "rebuild", 00:19:05.305 "target": "spare", 00:19:05.305 "progress": { 
00:19:05.305 "blocks": 16384, 00:19:05.305 "percent": 25 00:19:05.305 } 00:19:05.305 }, 00:19:05.305 "base_bdevs_list": [ 00:19:05.305 { 00:19:05.305 "name": "spare", 00:19:05.305 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:05.305 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 }, 00:19:05.305 { 00:19:05.305 "name": null, 00:19:05.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.305 "is_configured": false, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 }, 00:19:05.305 { 00:19:05.305 "name": "BaseBdev3", 00:19:05.305 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:05.305 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 }, 00:19:05.305 { 00:19:05.305 "name": "BaseBdev4", 00:19:05.305 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:05.305 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 } 00:19:05.305 ] 00:19:05.305 }' 00:19:05.305 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.579 121.50 IOPS, 364.50 MiB/s [2024-10-30T10:47:27.049Z] 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=520 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.579 "name": "raid_bdev1", 00:19:05.579 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:05.579 "strip_size_kb": 0, 00:19:05.579 "state": "online", 00:19:05.579 "raid_level": "raid1", 00:19:05.579 "superblock": false, 00:19:05.579 "num_base_bdevs": 4, 00:19:05.579 "num_base_bdevs_discovered": 3, 00:19:05.579 "num_base_bdevs_operational": 3, 00:19:05.579 "process": { 00:19:05.579 "type": "rebuild", 00:19:05.579 "target": "spare", 00:19:05.579 "progress": { 00:19:05.579 "blocks": 18432, 00:19:05.579 "percent": 28 00:19:05.579 } 00:19:05.579 }, 00:19:05.579 "base_bdevs_list": [ 00:19:05.579 { 00:19:05.579 "name": "spare", 00:19:05.579 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:05.579 "is_configured": true, 00:19:05.579 "data_offset": 0, 00:19:05.579 "data_size": 65536 00:19:05.579 }, 00:19:05.579 { 00:19:05.579 "name": null, 00:19:05.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.579 "is_configured": false, 00:19:05.579 "data_offset": 0, 00:19:05.579 
"data_size": 65536 00:19:05.579 }, 00:19:05.579 { 00:19:05.579 "name": "BaseBdev3", 00:19:05.579 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:05.579 "is_configured": true, 00:19:05.579 "data_offset": 0, 00:19:05.579 "data_size": 65536 00:19:05.579 }, 00:19:05.579 { 00:19:05.579 "name": "BaseBdev4", 00:19:05.579 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:05.579 "is_configured": true, 00:19:05.579 "data_offset": 0, 00:19:05.579 "data_size": 65536 00:19:05.579 } 00:19:05.579 ] 00:19:05.579 }' 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.579 [2024-10-30 10:47:26.952569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.579 10:47:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:05.837 [2024-10-30 10:47:27.082903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:05.837 [2024-10-30 10:47:27.083225] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:06.404 [2024-10-30 10:47:27.741753] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:06.690 107.20 IOPS, 321.60 MiB/s [2024-10-30T10:47:28.160Z] 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.690 "name": "raid_bdev1", 00:19:06.690 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:06.690 "strip_size_kb": 0, 00:19:06.690 "state": "online", 00:19:06.690 "raid_level": "raid1", 00:19:06.690 "superblock": false, 00:19:06.690 "num_base_bdevs": 4, 00:19:06.690 "num_base_bdevs_discovered": 3, 00:19:06.690 "num_base_bdevs_operational": 3, 00:19:06.690 "process": { 00:19:06.690 "type": "rebuild", 00:19:06.690 "target": "spare", 00:19:06.690 "progress": { 00:19:06.690 "blocks": 36864, 00:19:06.690 "percent": 56 00:19:06.690 } 00:19:06.690 }, 00:19:06.690 "base_bdevs_list": [ 00:19:06.690 { 00:19:06.690 "name": "spare", 00:19:06.690 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:06.690 "is_configured": true, 00:19:06.690 "data_offset": 0, 00:19:06.690 "data_size": 65536 00:19:06.690 }, 00:19:06.690 { 00:19:06.690 "name": null, 00:19:06.690 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:06.690 "is_configured": false, 00:19:06.690 "data_offset": 0, 00:19:06.690 "data_size": 65536 00:19:06.690 }, 00:19:06.690 { 00:19:06.690 "name": "BaseBdev3", 00:19:06.690 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:06.690 "is_configured": true, 00:19:06.690 "data_offset": 0, 00:19:06.690 "data_size": 65536 00:19:06.690 }, 00:19:06.690 { 00:19:06.690 "name": "BaseBdev4", 00:19:06.690 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:06.690 "is_configured": true, 00:19:06.690 "data_offset": 0, 00:19:06.690 "data_size": 65536 00:19:06.690 } 00:19:06.690 ] 00:19:06.690 }' 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.690 [2024-10-30 10:47:28.102451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.690 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.969 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.969 10:47:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:07.535 96.50 IOPS, 289.50 MiB/s [2024-10-30T10:47:29.005Z] [2024-10-30 10:47:28.893354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:07.794 [2024-10-30 10:47:29.015050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.794 10:47:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.794 "name": "raid_bdev1", 00:19:07.794 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:07.794 "strip_size_kb": 0, 00:19:07.794 "state": "online", 00:19:07.794 "raid_level": "raid1", 00:19:07.794 "superblock": false, 00:19:07.794 "num_base_bdevs": 4, 00:19:07.794 "num_base_bdevs_discovered": 3, 00:19:07.794 "num_base_bdevs_operational": 3, 00:19:07.794 "process": { 00:19:07.794 "type": "rebuild", 00:19:07.794 "target": "spare", 00:19:07.794 "progress": { 00:19:07.794 "blocks": 55296, 00:19:07.794 "percent": 84 00:19:07.794 } 00:19:07.794 }, 00:19:07.794 "base_bdevs_list": [ 00:19:07.794 { 00:19:07.794 "name": "spare", 00:19:07.794 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:07.794 "is_configured": true, 00:19:07.794 "data_offset": 0, 00:19:07.794 "data_size": 65536 00:19:07.794 }, 00:19:07.794 { 00:19:07.794 "name": null, 00:19:07.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.794 
"is_configured": false, 00:19:07.794 "data_offset": 0, 00:19:07.794 "data_size": 65536 00:19:07.794 }, 00:19:07.794 { 00:19:07.794 "name": "BaseBdev3", 00:19:07.794 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:07.794 "is_configured": true, 00:19:07.794 "data_offset": 0, 00:19:07.794 "data_size": 65536 00:19:07.794 }, 00:19:07.794 { 00:19:07.794 "name": "BaseBdev4", 00:19:07.794 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:07.794 "is_configured": true, 00:19:07.794 "data_offset": 0, 00:19:07.794 "data_size": 65536 00:19:07.794 } 00:19:07.794 ] 00:19:07.794 }' 00:19:07.794 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.051 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:08.051 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.051 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.051 10:47:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:08.308 [2024-10-30 10:47:29.688453] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:08.565 [2024-10-30 10:47:29.788445] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:08.565 [2024-10-30 10:47:29.790878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.130 87.57 IOPS, 262.71 MiB/s [2024-10-30T10:47:30.600Z] 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- 
# local process_type=rebuild 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.130 "name": "raid_bdev1", 00:19:09.130 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:09.130 "strip_size_kb": 0, 00:19:09.130 "state": "online", 00:19:09.130 "raid_level": "raid1", 00:19:09.130 "superblock": false, 00:19:09.130 "num_base_bdevs": 4, 00:19:09.130 "num_base_bdevs_discovered": 3, 00:19:09.130 "num_base_bdevs_operational": 3, 00:19:09.130 "base_bdevs_list": [ 00:19:09.130 { 00:19:09.130 "name": "spare", 00:19:09.130 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:09.130 "is_configured": true, 00:19:09.130 "data_offset": 0, 00:19:09.130 "data_size": 65536 00:19:09.130 }, 00:19:09.130 { 00:19:09.130 "name": null, 00:19:09.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.130 "is_configured": false, 00:19:09.130 "data_offset": 0, 00:19:09.130 "data_size": 65536 00:19:09.130 }, 00:19:09.130 { 00:19:09.130 "name": "BaseBdev3", 00:19:09.130 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:09.130 "is_configured": true, 00:19:09.130 "data_offset": 0, 00:19:09.130 "data_size": 65536 00:19:09.130 }, 00:19:09.130 { 00:19:09.130 "name": "BaseBdev4", 
00:19:09.130 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:09.130 "is_configured": true, 00:19:09.130 "data_offset": 0, 00:19:09.130 "data_size": 65536 00:19:09.130 } 00:19:09.130 ] 00:19:09.130 }' 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.130 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.131 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.131 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.131 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.131 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.131 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.131 "name": "raid_bdev1", 
00:19:09.131 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:09.131 "strip_size_kb": 0, 00:19:09.131 "state": "online", 00:19:09.131 "raid_level": "raid1", 00:19:09.131 "superblock": false, 00:19:09.131 "num_base_bdevs": 4, 00:19:09.131 "num_base_bdevs_discovered": 3, 00:19:09.131 "num_base_bdevs_operational": 3, 00:19:09.131 "base_bdevs_list": [ 00:19:09.131 { 00:19:09.131 "name": "spare", 00:19:09.131 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:09.131 "is_configured": true, 00:19:09.131 "data_offset": 0, 00:19:09.131 "data_size": 65536 00:19:09.131 }, 00:19:09.131 { 00:19:09.131 "name": null, 00:19:09.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.131 "is_configured": false, 00:19:09.131 "data_offset": 0, 00:19:09.131 "data_size": 65536 00:19:09.131 }, 00:19:09.131 { 00:19:09.131 "name": "BaseBdev3", 00:19:09.131 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:09.131 "is_configured": true, 00:19:09.131 "data_offset": 0, 00:19:09.131 "data_size": 65536 00:19:09.131 }, 00:19:09.131 { 00:19:09.131 "name": "BaseBdev4", 00:19:09.131 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:09.131 "is_configured": true, 00:19:09.131 "data_offset": 0, 00:19:09.131 "data_size": 65536 00:19:09.131 } 00:19:09.131 ] 00:19:09.131 }' 00:19:09.131 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.390 10:47:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.390 "name": "raid_bdev1", 00:19:09.390 "uuid": "afac73c5-1db4-4bfe-ba1d-940494fc85fb", 00:19:09.390 "strip_size_kb": 0, 00:19:09.390 "state": "online", 00:19:09.390 "raid_level": "raid1", 00:19:09.390 "superblock": false, 00:19:09.390 "num_base_bdevs": 4, 00:19:09.390 "num_base_bdevs_discovered": 3, 00:19:09.390 "num_base_bdevs_operational": 3, 00:19:09.390 "base_bdevs_list": [ 00:19:09.390 { 00:19:09.390 "name": "spare", 00:19:09.390 "uuid": "f498009e-6f7c-5161-b61d-bbd0fa2057c9", 00:19:09.390 
"is_configured": true, 00:19:09.390 "data_offset": 0, 00:19:09.390 "data_size": 65536 00:19:09.390 }, 00:19:09.390 { 00:19:09.390 "name": null, 00:19:09.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.390 "is_configured": false, 00:19:09.390 "data_offset": 0, 00:19:09.390 "data_size": 65536 00:19:09.390 }, 00:19:09.390 { 00:19:09.390 "name": "BaseBdev3", 00:19:09.390 "uuid": "135a1d60-67b9-53e5-ae0d-1f702f1f1240", 00:19:09.390 "is_configured": true, 00:19:09.390 "data_offset": 0, 00:19:09.390 "data_size": 65536 00:19:09.390 }, 00:19:09.390 { 00:19:09.390 "name": "BaseBdev4", 00:19:09.390 "uuid": "d9d7f532-d7fb-54e4-be69-5b8c38bf3b07", 00:19:09.390 "is_configured": true, 00:19:09.390 "data_offset": 0, 00:19:09.390 "data_size": 65536 00:19:09.390 } 00:19:09.390 ] 00:19:09.390 }' 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.390 10:47:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.957 81.25 IOPS, 243.75 MiB/s [2024-10-30T10:47:31.427Z] 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.957 [2024-10-30 10:47:31.197891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:09.957 [2024-10-30 10:47:31.197927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.957 00:19:09.957 Latency(us) 00:19:09.957 [2024-10-30T10:47:31.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.957 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:09.957 raid_bdev1 : 8.48 78.81 236.44 0.00 0.00 17937.35 273.69 122016.12 00:19:09.957 [2024-10-30T10:47:31.427Z] 
=================================================================================================================== 00:19:09.957 [2024-10-30T10:47:31.427Z] Total : 78.81 236.44 0.00 0.00 17937.35 273.69 122016.12 00:19:09.957 [2024-10-30 10:47:31.264725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.957 [2024-10-30 10:47:31.264795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.957 [2024-10-30 10:47:31.264925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.957 [2024-10-30 10:47:31.264954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:09.957 { 00:19:09.957 "results": [ 00:19:09.957 { 00:19:09.957 "job": "raid_bdev1", 00:19:09.957 "core_mask": "0x1", 00:19:09.957 "workload": "randrw", 00:19:09.957 "percentage": 50, 00:19:09.957 "status": "finished", 00:19:09.957 "queue_depth": 2, 00:19:09.957 "io_size": 3145728, 00:19:09.957 "runtime": 8.475693, 00:19:09.957 "iops": 78.81361441477411, 00:19:09.957 "mibps": 236.4408432443223, 00:19:09.957 "io_failed": 0, 00:19:09.957 "io_timeout": 0, 00:19:09.957 "avg_latency_us": 17937.350027218294, 00:19:09.957 "min_latency_us": 273.6872727272727, 00:19:09.957 "max_latency_us": 122016.11636363636 00:19:09.957 } 00:19:09.957 ], 00:19:09.957 "core_count": 1 00:19:09.957 } 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.957 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:10.215 /dev/nbd0 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( 
i = 1 )) 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.215 1+0 records in 00:19:10.215 1+0 records out 00:19:10.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059035 s, 6.9 MB/s 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:10.215 10:47:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.215 10:47:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:10.781 /dev/nbd1 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:10.781 10:47:32 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.781 1+0 records in 00:19:10.781 1+0 records out 00:19:10.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304833 s, 13.4 MB/s 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:10.781 
10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.781 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:11.349 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:11.609 /dev/nbd1 00:19:11.609 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:11.609 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.610 1+0 records in 00:19:11.610 1+0 records out 00:19:11.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381495 s, 10.7 
MB/s 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.610 10:47:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:12.178 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:12.436 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:12.437 10:47:33 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79221 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 79221 ']' 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 79221 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79221 00:19:12.437 killing process with pid 79221 00:19:12.437 Received shutdown signal, test time was about 10.966026 seconds 00:19:12.437 00:19:12.437 Latency(us) 00:19:12.437 [2024-10-30T10:47:33.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.437 [2024-10-30T10:47:33.907Z] =================================================================================================================== 00:19:12.437 [2024-10-30T10:47:33.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79221' 00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 79221 00:19:12.437 [2024-10-30 10:47:33.736146] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:19:12.437 10:47:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 79221 00:19:12.695 [2024-10-30 10:47:34.118311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:14.070 00:19:14.070 real 0m14.668s 00:19:14.070 user 0m19.464s 00:19:14.070 sys 0m1.832s 00:19:14.070 ************************************ 00:19:14.070 END TEST raid_rebuild_test_io 00:19:14.070 ************************************ 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:19:14.070 10:47:35 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:19:14.070 10:47:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:14.070 10:47:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:14.070 10:47:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.070 ************************************ 00:19:14.070 START TEST raid_rebuild_test_sb_io 00:19:14.070 ************************************ 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:14.070 10:47:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79641 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79641 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79641 ']' 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.070 10:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:14.070 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:14.070 Zero copy mechanism will not be used. 00:19:14.070 [2024-10-30 10:47:35.362684] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:19:14.070 [2024-10-30 10:47:35.362828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79641 ] 00:19:14.070 [2024-10-30 10:47:35.535583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.328 [2024-10-30 10:47:35.660687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.586 [2024-10-30 10:47:35.863750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.586 [2024-10-30 10:47:35.864031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 BaseBdev1_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 [2024-10-30 10:47:36.413719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:15.158 [2024-10-30 10:47:36.413954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.158 [2024-10-30 10:47:36.414055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:15.158 [2024-10-30 10:47:36.414188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.158 [2024-10-30 10:47:36.417146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.158 [2024-10-30 10:47:36.417197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.158 BaseBdev1 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 BaseBdev2_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 [2024-10-30 10:47:36.466859] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:15.158 [2024-10-30 10:47:36.467085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.158 [2024-10-30 10:47:36.467125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:15.158 [2024-10-30 10:47:36.467147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.158 [2024-10-30 10:47:36.469932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.158 [2024-10-30 10:47:36.470110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:15.158 BaseBdev2 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 BaseBdev3_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 [2024-10-30 10:47:36.535488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:15.158 [2024-10-30 10:47:36.535697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:15.158 [2024-10-30 10:47:36.535741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:15.158 [2024-10-30 10:47:36.535764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.158 [2024-10-30 10:47:36.538601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.158 [2024-10-30 10:47:36.538653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:15.158 BaseBdev3 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 BaseBdev4_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 [2024-10-30 10:47:36.591686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:15.158 [2024-10-30 10:47:36.591896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.158 [2024-10-30 10:47:36.591970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:15.158 
[2024-10-30 10:47:36.592193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.158 [2024-10-30 10:47:36.595003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.158 [2024-10-30 10:47:36.595053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:15.158 BaseBdev4 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.158 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.418 spare_malloc 00:19:15.418 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.419 spare_delay 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.419 [2024-10-30 10:47:36.656816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:15.419 [2024-10-30 10:47:36.657025] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.419 [2024-10-30 10:47:36.657100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:15.419 [2024-10-30 10:47:36.657205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.419 [2024-10-30 10:47:36.660150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.419 [2024-10-30 10:47:36.660216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:15.419 spare 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.419 [2024-10-30 10:47:36.664908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.419 [2024-10-30 10:47:36.667597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.419 [2024-10-30 10:47:36.667892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.419 [2024-10-30 10:47:36.668113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:15.419 [2024-10-30 10:47:36.668495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:15.419 [2024-10-30 10:47:36.668662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:15.419 [2024-10-30 10:47:36.669053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:15.419 [2024-10-30 10:47:36.669413] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:15.419 [2024-10-30 10:47:36.669539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:15.419 [2024-10-30 10:47:36.669812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.419 10:47:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.419 "name": "raid_bdev1", 00:19:15.419 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:15.419 "strip_size_kb": 0, 00:19:15.419 "state": "online", 00:19:15.419 "raid_level": "raid1", 00:19:15.419 "superblock": true, 00:19:15.419 "num_base_bdevs": 4, 00:19:15.419 "num_base_bdevs_discovered": 4, 00:19:15.419 "num_base_bdevs_operational": 4, 00:19:15.419 "base_bdevs_list": [ 00:19:15.419 { 00:19:15.419 "name": "BaseBdev1", 00:19:15.419 "uuid": "3abf384b-fa83-5bfb-be05-99367ec4d090", 00:19:15.419 "is_configured": true, 00:19:15.419 "data_offset": 2048, 00:19:15.419 "data_size": 63488 00:19:15.419 }, 00:19:15.419 { 00:19:15.419 "name": "BaseBdev2", 00:19:15.419 "uuid": "6e00b2d7-b206-5440-b95d-61157eb084ec", 00:19:15.419 "is_configured": true, 00:19:15.419 "data_offset": 2048, 00:19:15.419 "data_size": 63488 00:19:15.419 }, 00:19:15.419 { 00:19:15.419 "name": "BaseBdev3", 00:19:15.419 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:15.419 "is_configured": true, 00:19:15.419 "data_offset": 2048, 00:19:15.419 "data_size": 63488 00:19:15.419 }, 00:19:15.419 { 00:19:15.419 "name": "BaseBdev4", 00:19:15.419 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:15.419 "is_configured": true, 00:19:15.419 "data_offset": 2048, 00:19:15.419 "data_size": 63488 00:19:15.419 } 00:19:15.419 ] 00:19:15.419 }' 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.419 10:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.755 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:15.755 10:47:37 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:15.755 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.755 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:15.755 [2024-10-30 10:47:37.198397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:16.083 [2024-10-30 10:47:37.293944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.083 
10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.083 "name": "raid_bdev1", 00:19:16.083 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 
00:19:16.083 "strip_size_kb": 0, 00:19:16.083 "state": "online", 00:19:16.083 "raid_level": "raid1", 00:19:16.083 "superblock": true, 00:19:16.083 "num_base_bdevs": 4, 00:19:16.083 "num_base_bdevs_discovered": 3, 00:19:16.083 "num_base_bdevs_operational": 3, 00:19:16.083 "base_bdevs_list": [ 00:19:16.083 { 00:19:16.083 "name": null, 00:19:16.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.083 "is_configured": false, 00:19:16.083 "data_offset": 0, 00:19:16.083 "data_size": 63488 00:19:16.083 }, 00:19:16.083 { 00:19:16.083 "name": "BaseBdev2", 00:19:16.083 "uuid": "6e00b2d7-b206-5440-b95d-61157eb084ec", 00:19:16.083 "is_configured": true, 00:19:16.083 "data_offset": 2048, 00:19:16.083 "data_size": 63488 00:19:16.083 }, 00:19:16.083 { 00:19:16.083 "name": "BaseBdev3", 00:19:16.083 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:16.083 "is_configured": true, 00:19:16.083 "data_offset": 2048, 00:19:16.083 "data_size": 63488 00:19:16.083 }, 00:19:16.083 { 00:19:16.083 "name": "BaseBdev4", 00:19:16.083 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:16.083 "is_configured": true, 00:19:16.083 "data_offset": 2048, 00:19:16.083 "data_size": 63488 00:19:16.083 } 00:19:16.083 ] 00:19:16.083 }' 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.083 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:16.083 [2024-10-30 10:47:37.426171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:16.083 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:16.083 Zero copy mechanism will not be used. 00:19:16.083 Running I/O for 60 seconds... 
00:19:16.343 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:16.343 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.343 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:16.343 [2024-10-30 10:47:37.807559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:16.603 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.603 10:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:16.603 [2024-10-30 10:47:37.866508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:19:16.603 [2024-10-30 10:47:37.869308] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:16.603 [2024-10-30 10:47:37.996622] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:16.603 [2024-10-30 10:47:37.998419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:16.861 [2024-10-30 10:47:38.226814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:16.861 [2024-10-30 10:47:38.227384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:17.119 141.00 IOPS, 423.00 MiB/s [2024-10-30T10:47:38.589Z] [2024-10-30 10:47:38.563297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:17.376 [2024-10-30 10:47:38.798479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.635 "name": "raid_bdev1", 00:19:17.635 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:17.635 "strip_size_kb": 0, 00:19:17.635 "state": "online", 00:19:17.635 "raid_level": "raid1", 00:19:17.635 "superblock": true, 00:19:17.635 "num_base_bdevs": 4, 00:19:17.635 "num_base_bdevs_discovered": 4, 00:19:17.635 "num_base_bdevs_operational": 4, 00:19:17.635 "process": { 00:19:17.635 "type": "rebuild", 00:19:17.635 "target": "spare", 00:19:17.635 "progress": { 00:19:17.635 "blocks": 10240, 00:19:17.635 "percent": 16 00:19:17.635 } 00:19:17.635 }, 00:19:17.635 "base_bdevs_list": [ 00:19:17.635 { 00:19:17.635 "name": "spare", 00:19:17.635 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:17.635 "is_configured": true, 00:19:17.635 "data_offset": 2048, 00:19:17.635 "data_size": 63488 
00:19:17.635 }, 00:19:17.635 { 00:19:17.635 "name": "BaseBdev2", 00:19:17.635 "uuid": "6e00b2d7-b206-5440-b95d-61157eb084ec", 00:19:17.635 "is_configured": true, 00:19:17.635 "data_offset": 2048, 00:19:17.635 "data_size": 63488 00:19:17.635 }, 00:19:17.635 { 00:19:17.635 "name": "BaseBdev3", 00:19:17.635 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:17.635 "is_configured": true, 00:19:17.635 "data_offset": 2048, 00:19:17.635 "data_size": 63488 00:19:17.635 }, 00:19:17.635 { 00:19:17.635 "name": "BaseBdev4", 00:19:17.635 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:17.635 "is_configured": true, 00:19:17.635 "data_offset": 2048, 00:19:17.635 "data_size": 63488 00:19:17.635 } 00:19:17.635 ] 00:19:17.635 }' 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.635 10:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.635 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.635 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:17.635 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.635 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.635 [2024-10-30 10:47:39.023688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.635 [2024-10-30 10:47:39.038365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:17.895 [2024-10-30 10:47:39.149523] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:17.895 [2024-10-30 
10:47:39.163103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.895 [2024-10-30 10:47:39.163200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:17.895 [2024-10-30 10:47:39.163221] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:17.895 [2024-10-30 10:47:39.204676] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.895 10:47:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.895 "name": "raid_bdev1", 00:19:17.895 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:17.895 "strip_size_kb": 0, 00:19:17.895 "state": "online", 00:19:17.895 "raid_level": "raid1", 00:19:17.895 "superblock": true, 00:19:17.895 "num_base_bdevs": 4, 00:19:17.895 "num_base_bdevs_discovered": 3, 00:19:17.895 "num_base_bdevs_operational": 3, 00:19:17.895 "base_bdevs_list": [ 00:19:17.895 { 00:19:17.895 "name": null, 00:19:17.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.895 "is_configured": false, 00:19:17.895 "data_offset": 0, 00:19:17.895 "data_size": 63488 00:19:17.895 }, 00:19:17.895 { 00:19:17.895 "name": "BaseBdev2", 00:19:17.895 "uuid": "6e00b2d7-b206-5440-b95d-61157eb084ec", 00:19:17.895 "is_configured": true, 00:19:17.895 "data_offset": 2048, 00:19:17.895 "data_size": 63488 00:19:17.895 }, 00:19:17.895 { 00:19:17.895 "name": "BaseBdev3", 00:19:17.895 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:17.895 "is_configured": true, 00:19:17.895 "data_offset": 2048, 00:19:17.895 "data_size": 63488 00:19:17.895 }, 00:19:17.895 { 00:19:17.895 "name": "BaseBdev4", 00:19:17.895 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:17.895 "is_configured": true, 00:19:17.895 "data_offset": 2048, 00:19:17.895 "data_size": 63488 00:19:17.895 } 00:19:17.895 ] 00:19:17.895 }' 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.895 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.417 118.00 IOPS, 354.00 MiB/s [2024-10-30T10:47:39.887Z] 10:47:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.417 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.417 "name": "raid_bdev1", 00:19:18.417 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:18.417 "strip_size_kb": 0, 00:19:18.417 "state": "online", 00:19:18.417 "raid_level": "raid1", 00:19:18.417 "superblock": true, 00:19:18.417 "num_base_bdevs": 4, 00:19:18.417 "num_base_bdevs_discovered": 3, 00:19:18.417 "num_base_bdevs_operational": 3, 00:19:18.417 "base_bdevs_list": [ 00:19:18.417 { 00:19:18.417 "name": null, 00:19:18.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.417 "is_configured": false, 00:19:18.417 "data_offset": 0, 00:19:18.417 "data_size": 63488 00:19:18.417 }, 00:19:18.417 { 00:19:18.417 "name": "BaseBdev2", 00:19:18.417 "uuid": "6e00b2d7-b206-5440-b95d-61157eb084ec", 00:19:18.417 "is_configured": true, 00:19:18.417 "data_offset": 
2048, 00:19:18.417 "data_size": 63488 00:19:18.417 }, 00:19:18.417 { 00:19:18.417 "name": "BaseBdev3", 00:19:18.417 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:18.417 "is_configured": true, 00:19:18.418 "data_offset": 2048, 00:19:18.418 "data_size": 63488 00:19:18.418 }, 00:19:18.418 { 00:19:18.418 "name": "BaseBdev4", 00:19:18.418 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:18.418 "is_configured": true, 00:19:18.418 "data_offset": 2048, 00:19:18.418 "data_size": 63488 00:19:18.418 } 00:19:18.418 ] 00:19:18.418 }' 00:19:18.418 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.418 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.418 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.679 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.679 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.679 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.679 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:18.679 [2024-10-30 10:47:39.945426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.679 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.679 10:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:18.679 [2024-10-30 10:47:39.993392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:18.679 [2024-10-30 10:47:39.996164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:18.679 [2024-10-30 10:47:40.107937] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:18.679 [2024-10-30 10:47:40.108908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:18.938 [2024-10-30 10:47:40.312596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:18.938 [2024-10-30 10:47:40.313073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:19.197 131.67 IOPS, 395.00 MiB/s [2024-10-30T10:47:40.667Z] [2024-10-30 10:47:40.587784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:19.765 [2024-10-30 10:47:40.942099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.765 10:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:19:19.765 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.765 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.765 "name": "raid_bdev1", 00:19:19.765 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:19.765 "strip_size_kb": 0, 00:19:19.765 "state": "online", 00:19:19.765 "raid_level": "raid1", 00:19:19.765 "superblock": true, 00:19:19.766 "num_base_bdevs": 4, 00:19:19.766 "num_base_bdevs_discovered": 4, 00:19:19.766 "num_base_bdevs_operational": 4, 00:19:19.766 "process": { 00:19:19.766 "type": "rebuild", 00:19:19.766 "target": "spare", 00:19:19.766 "progress": { 00:19:19.766 "blocks": 14336, 00:19:19.766 "percent": 22 00:19:19.766 } 00:19:19.766 }, 00:19:19.766 "base_bdevs_list": [ 00:19:19.766 { 00:19:19.766 "name": "spare", 00:19:19.766 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:19.766 "is_configured": true, 00:19:19.766 "data_offset": 2048, 00:19:19.766 "data_size": 63488 00:19:19.766 }, 00:19:19.766 { 00:19:19.766 "name": "BaseBdev2", 00:19:19.766 "uuid": "6e00b2d7-b206-5440-b95d-61157eb084ec", 00:19:19.766 "is_configured": true, 00:19:19.766 "data_offset": 2048, 00:19:19.766 "data_size": 63488 00:19:19.766 }, 00:19:19.766 { 00:19:19.766 "name": "BaseBdev3", 00:19:19.766 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:19.766 "is_configured": true, 00:19:19.766 "data_offset": 2048, 00:19:19.766 "data_size": 63488 00:19:19.766 }, 00:19:19.766 { 00:19:19.766 "name": "BaseBdev4", 00:19:19.766 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:19.766 "is_configured": true, 00:19:19.766 "data_offset": 2048, 00:19:19.766 "data_size": 63488 00:19:19.766 } 00:19:19.766 ] 00:19:19.766 }' 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.766 
10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:19.766 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.766 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.766 [2024-10-30 10:47:41.164811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:19.766 [2024-10-30 10:47:41.190017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:19.766 [2024-10-30 10:47:41.190994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:20.025 [2024-10-30 10:47:41.402442] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:19:20.025 [2024-10-30 10:47:41.402882] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.025 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.025 119.00 IOPS, 357.00 MiB/s [2024-10-30T10:47:41.495Z] 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.025 "name": "raid_bdev1", 00:19:20.025 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:20.025 "strip_size_kb": 0, 00:19:20.025 "state": "online", 00:19:20.025 "raid_level": "raid1", 00:19:20.025 "superblock": true, 00:19:20.025 "num_base_bdevs": 4, 00:19:20.025 "num_base_bdevs_discovered": 3, 00:19:20.025 "num_base_bdevs_operational": 3, 00:19:20.025 "process": { 00:19:20.025 "type": "rebuild", 00:19:20.025 "target": "spare", 00:19:20.025 
"progress": { 00:19:20.025 "blocks": 16384, 00:19:20.025 "percent": 25 00:19:20.025 } 00:19:20.025 }, 00:19:20.025 "base_bdevs_list": [ 00:19:20.025 { 00:19:20.025 "name": "spare", 00:19:20.026 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:20.026 "is_configured": true, 00:19:20.026 "data_offset": 2048, 00:19:20.026 "data_size": 63488 00:19:20.026 }, 00:19:20.026 { 00:19:20.026 "name": null, 00:19:20.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.026 "is_configured": false, 00:19:20.026 "data_offset": 0, 00:19:20.026 "data_size": 63488 00:19:20.026 }, 00:19:20.026 { 00:19:20.026 "name": "BaseBdev3", 00:19:20.026 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:20.026 "is_configured": true, 00:19:20.026 "data_offset": 2048, 00:19:20.026 "data_size": 63488 00:19:20.026 }, 00:19:20.026 { 00:19:20.026 "name": "BaseBdev4", 00:19:20.026 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:20.026 "is_configured": true, 00:19:20.026 "data_offset": 2048, 00:19:20.026 "data_size": 63488 00:19:20.026 } 00:19:20.026 ] 00:19:20.026 }' 00:19:20.026 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.284 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.284 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.284 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.284 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=535 00:19:20.284 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:20.284 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:20.284 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.285 "name": "raid_bdev1", 00:19:20.285 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:20.285 "strip_size_kb": 0, 00:19:20.285 "state": "online", 00:19:20.285 "raid_level": "raid1", 00:19:20.285 "superblock": true, 00:19:20.285 "num_base_bdevs": 4, 00:19:20.285 "num_base_bdevs_discovered": 3, 00:19:20.285 "num_base_bdevs_operational": 3, 00:19:20.285 "process": { 00:19:20.285 "type": "rebuild", 00:19:20.285 "target": "spare", 00:19:20.285 "progress": { 00:19:20.285 "blocks": 18432, 00:19:20.285 "percent": 29 00:19:20.285 } 00:19:20.285 }, 00:19:20.285 "base_bdevs_list": [ 00:19:20.285 { 00:19:20.285 "name": "spare", 00:19:20.285 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:20.285 "is_configured": true, 00:19:20.285 "data_offset": 2048, 00:19:20.285 "data_size": 63488 00:19:20.285 }, 00:19:20.285 { 00:19:20.285 "name": null, 00:19:20.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.285 "is_configured": false, 00:19:20.285 
"data_offset": 0, 00:19:20.285 "data_size": 63488 00:19:20.285 }, 00:19:20.285 { 00:19:20.285 "name": "BaseBdev3", 00:19:20.285 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:20.285 "is_configured": true, 00:19:20.285 "data_offset": 2048, 00:19:20.285 "data_size": 63488 00:19:20.285 }, 00:19:20.285 { 00:19:20.285 "name": "BaseBdev4", 00:19:20.285 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:20.285 "is_configured": true, 00:19:20.285 "data_offset": 2048, 00:19:20.285 "data_size": 63488 00:19:20.285 } 00:19:20.285 ] 00:19:20.285 }' 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.285 10:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:20.285 [2024-10-30 10:47:41.737824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:20.543 [2024-10-30 10:47:41.969789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:20.802 [2024-10-30 10:47:42.079174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:21.319 110.00 IOPS, 330.00 MiB/s [2024-10-30T10:47:42.789Z] [2024-10-30 10:47:42.535370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.319 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.319 "name": "raid_bdev1", 00:19:21.319 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:21.319 "strip_size_kb": 0, 00:19:21.319 "state": "online", 00:19:21.319 "raid_level": "raid1", 00:19:21.319 "superblock": true, 00:19:21.319 "num_base_bdevs": 4, 00:19:21.319 "num_base_bdevs_discovered": 3, 00:19:21.319 "num_base_bdevs_operational": 3, 00:19:21.319 "process": { 00:19:21.319 "type": "rebuild", 00:19:21.319 "target": "spare", 00:19:21.319 "progress": { 00:19:21.319 "blocks": 36864, 00:19:21.319 "percent": 58 00:19:21.319 } 00:19:21.319 }, 00:19:21.319 "base_bdevs_list": [ 00:19:21.319 { 00:19:21.319 "name": "spare", 00:19:21.319 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:21.319 "is_configured": true, 00:19:21.319 "data_offset": 2048, 00:19:21.319 "data_size": 63488 
00:19:21.319 }, 00:19:21.319 { 00:19:21.319 "name": null, 00:19:21.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.319 "is_configured": false, 00:19:21.319 "data_offset": 0, 00:19:21.319 "data_size": 63488 00:19:21.319 }, 00:19:21.319 { 00:19:21.319 "name": "BaseBdev3", 00:19:21.319 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:21.319 "is_configured": true, 00:19:21.319 "data_offset": 2048, 00:19:21.319 "data_size": 63488 00:19:21.319 }, 00:19:21.320 { 00:19:21.320 "name": "BaseBdev4", 00:19:21.320 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:21.320 "is_configured": true, 00:19:21.320 "data_offset": 2048, 00:19:21.320 "data_size": 63488 00:19:21.320 } 00:19:21.320 ] 00:19:21.320 }' 00:19:21.320 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.658 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.658 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.658 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.658 10:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:21.658 [2024-10-30 10:47:43.078219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:21.940 [2024-10-30 10:47:43.288533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:19:22.457 98.67 IOPS, 296.00 MiB/s [2024-10-30T10:47:43.927Z] 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:22.457 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.458 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:22.458 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.458 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.458 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.458 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.458 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.458 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.458 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:22.716 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.716 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.716 "name": "raid_bdev1", 00:19:22.716 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:22.716 "strip_size_kb": 0, 00:19:22.716 "state": "online", 00:19:22.716 "raid_level": "raid1", 00:19:22.716 "superblock": true, 00:19:22.716 "num_base_bdevs": 4, 00:19:22.716 "num_base_bdevs_discovered": 3, 00:19:22.716 "num_base_bdevs_operational": 3, 00:19:22.716 "process": { 00:19:22.716 "type": "rebuild", 00:19:22.716 "target": "spare", 00:19:22.716 "progress": { 00:19:22.716 "blocks": 55296, 00:19:22.716 "percent": 87 00:19:22.716 } 00:19:22.716 }, 00:19:22.716 "base_bdevs_list": [ 00:19:22.716 { 00:19:22.716 "name": "spare", 00:19:22.716 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:22.716 "is_configured": true, 00:19:22.716 "data_offset": 2048, 00:19:22.716 "data_size": 63488 00:19:22.716 }, 00:19:22.716 { 00:19:22.716 "name": null, 00:19:22.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.716 "is_configured": false, 00:19:22.716 
"data_offset": 0, 00:19:22.716 "data_size": 63488 00:19:22.716 }, 00:19:22.716 { 00:19:22.716 "name": "BaseBdev3", 00:19:22.716 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:22.716 "is_configured": true, 00:19:22.716 "data_offset": 2048, 00:19:22.716 "data_size": 63488 00:19:22.716 }, 00:19:22.716 { 00:19:22.716 "name": "BaseBdev4", 00:19:22.716 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:22.716 "is_configured": true, 00:19:22.716 "data_offset": 2048, 00:19:22.716 "data_size": 63488 00:19:22.716 } 00:19:22.716 ] 00:19:22.716 }' 00:19:22.716 10:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.716 10:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.716 10:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.716 [2024-10-30 10:47:44.060252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:22.716 10:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.716 10:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:22.975 [2024-10-30 10:47:44.403141] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:23.233 90.14 IOPS, 270.43 MiB/s [2024-10-30T10:47:44.703Z] [2024-10-30 10:47:44.510630] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:23.233 [2024-10-30 10:47:44.513615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.801 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:23.801 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.801 10:47:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.801 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.801 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.801 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.801 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.801 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.802 "name": "raid_bdev1", 00:19:23.802 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:23.802 "strip_size_kb": 0, 00:19:23.802 "state": "online", 00:19:23.802 "raid_level": "raid1", 00:19:23.802 "superblock": true, 00:19:23.802 "num_base_bdevs": 4, 00:19:23.802 "num_base_bdevs_discovered": 3, 00:19:23.802 "num_base_bdevs_operational": 3, 00:19:23.802 "base_bdevs_list": [ 00:19:23.802 { 00:19:23.802 "name": "spare", 00:19:23.802 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:23.802 "is_configured": true, 00:19:23.802 "data_offset": 2048, 00:19:23.802 "data_size": 63488 00:19:23.802 }, 00:19:23.802 { 00:19:23.802 "name": null, 00:19:23.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.802 "is_configured": false, 00:19:23.802 "data_offset": 0, 00:19:23.802 "data_size": 63488 00:19:23.802 }, 00:19:23.802 { 00:19:23.802 "name": "BaseBdev3", 00:19:23.802 "uuid": 
"020045f0-8281-53c3-a347-f77960bc2866", 00:19:23.802 "is_configured": true, 00:19:23.802 "data_offset": 2048, 00:19:23.802 "data_size": 63488 00:19:23.802 }, 00:19:23.802 { 00:19:23.802 "name": "BaseBdev4", 00:19:23.802 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:23.802 "is_configured": true, 00:19:23.802 "data_offset": 2048, 00:19:23.802 "data_size": 63488 00:19:23.802 } 00:19:23.802 ] 00:19:23.802 }' 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:19:23.802 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.061 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.061 "name": "raid_bdev1", 00:19:24.061 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:24.061 "strip_size_kb": 0, 00:19:24.061 "state": "online", 00:19:24.061 "raid_level": "raid1", 00:19:24.061 "superblock": true, 00:19:24.061 "num_base_bdevs": 4, 00:19:24.061 "num_base_bdevs_discovered": 3, 00:19:24.061 "num_base_bdevs_operational": 3, 00:19:24.061 "base_bdevs_list": [ 00:19:24.061 { 00:19:24.061 "name": "spare", 00:19:24.061 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:24.061 "is_configured": true, 00:19:24.061 "data_offset": 2048, 00:19:24.061 "data_size": 63488 00:19:24.061 }, 00:19:24.061 { 00:19:24.061 "name": null, 00:19:24.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.061 "is_configured": false, 00:19:24.061 "data_offset": 0, 00:19:24.061 "data_size": 63488 00:19:24.061 }, 00:19:24.061 { 00:19:24.061 "name": "BaseBdev3", 00:19:24.061 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:24.061 "is_configured": true, 00:19:24.061 "data_offset": 2048, 00:19:24.061 "data_size": 63488 00:19:24.083 }, 00:19:24.083 { 00:19:24.083 "name": "BaseBdev4", 00:19:24.083 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:24.083 "is_configured": true, 00:19:24.083 "data_offset": 2048, 00:19:24.083 "data_size": 63488 00:19:24.083 } 00:19:24.083 ] 00:19:24.083 }' 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.084 
10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.084 "name": "raid_bdev1", 00:19:24.084 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:24.084 "strip_size_kb": 0, 00:19:24.084 "state": "online", 00:19:24.084 "raid_level": "raid1", 00:19:24.084 
"superblock": true, 00:19:24.084 "num_base_bdevs": 4, 00:19:24.084 "num_base_bdevs_discovered": 3, 00:19:24.084 "num_base_bdevs_operational": 3, 00:19:24.084 "base_bdevs_list": [ 00:19:24.084 { 00:19:24.084 "name": "spare", 00:19:24.084 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:24.084 "is_configured": true, 00:19:24.084 "data_offset": 2048, 00:19:24.084 "data_size": 63488 00:19:24.084 }, 00:19:24.084 { 00:19:24.084 "name": null, 00:19:24.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.084 "is_configured": false, 00:19:24.084 "data_offset": 0, 00:19:24.084 "data_size": 63488 00:19:24.084 }, 00:19:24.084 { 00:19:24.084 "name": "BaseBdev3", 00:19:24.084 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:24.084 "is_configured": true, 00:19:24.084 "data_offset": 2048, 00:19:24.084 "data_size": 63488 00:19:24.084 }, 00:19:24.084 { 00:19:24.084 "name": "BaseBdev4", 00:19:24.084 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:24.084 "is_configured": true, 00:19:24.084 "data_offset": 2048, 00:19:24.084 "data_size": 63488 00:19:24.084 } 00:19:24.084 ] 00:19:24.084 }' 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.084 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.651 82.75 IOPS, 248.25 MiB/s [2024-10-30T10:47:46.121Z] 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:24.651 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.651 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.651 [2024-10-30 10:47:45.887695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.651 [2024-10-30 10:47:45.887881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.651 00:19:24.651 Latency(us) 00:19:24.651 
[2024-10-30T10:47:46.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.651 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:24.651 raid_bdev1 : 8.53 79.01 237.02 0.00 0.00 16947.84 288.58 119632.99 00:19:24.651 [2024-10-30T10:47:46.121Z] =================================================================================================================== 00:19:24.651 [2024-10-30T10:47:46.121Z] Total : 79.01 237.02 0.00 0.00 16947.84 288.58 119632.99 00:19:24.651 [2024-10-30 10:47:45.979785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.651 [2024-10-30 10:47:45.980006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.651 [2024-10-30 10:47:45.980181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.651 [2024-10-30 10:47:45.980329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:24.651 { 00:19:24.651 "results": [ 00:19:24.651 { 00:19:24.651 "job": "raid_bdev1", 00:19:24.651 "core_mask": "0x1", 00:19:24.652 "workload": "randrw", 00:19:24.652 "percentage": 50, 00:19:24.652 "status": "finished", 00:19:24.652 "queue_depth": 2, 00:19:24.652 "io_size": 3145728, 00:19:24.652 "runtime": 8.530759, 00:19:24.652 "iops": 79.00821017215468, 00:19:24.652 "mibps": 237.02463051646401, 00:19:24.652 "io_failed": 0, 00:19:24.652 "io_timeout": 0, 00:19:24.652 "avg_latency_us": 16947.840863231726, 00:19:24.652 "min_latency_us": 288.58181818181816, 00:19:24.652 "max_latency_us": 119632.98909090909 00:19:24.652 } 00:19:24.652 ], 00:19:24.652 "core_count": 1 00:19:24.652 } 00:19:24.652 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.652 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.652 
10:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:19:24.652 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.652 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:24.652 10:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:24.652 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:19:24.911 /dev/nbd0 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:24.911 
10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.911 1+0 records in 00:19:24.911 1+0 records out 00:19:24.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657856 s, 6.2 MB/s 00:19:24.911 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.171 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:19:25.429 /dev/nbd1 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:25.429 1+0 records in 00:19:25.429 1+0 records out 00:19:25.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300132 s, 13.6 MB/s 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:19:25.429 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:25.430 10:47:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.430 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:25.689 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:25.689 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:25.689 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:25.689 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:25.689 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:25.689 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.689 10:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:19:25.948 
10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:25.948 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:19:26.207 /dev/nbd1 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.207 1+0 records in 00:19:26.207 1+0 records out 00:19:26.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355714 s, 11.5 MB/s 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.207 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:26.466 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:26.466 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:26.466 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:26.466 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.466 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.466 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:26.725 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:26.725 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.725 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:26.725 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:26.725 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:26.725 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:26.725 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:19:26.725 10:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:26.725 10:47:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:26.984 [2024-10-30 10:47:48.239992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:26.984 
[2024-10-30 10:47:48.240064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.984 [2024-10-30 10:47:48.240094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:26.984 [2024-10-30 10:47:48.240113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.984 [2024-10-30 10:47:48.242961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.984 [2024-10-30 10:47:48.243031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:26.984 [2024-10-30 10:47:48.243148] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:26.984 [2024-10-30 10:47:48.243231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.984 [2024-10-30 10:47:48.243421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:26.984 [2024-10-30 10:47:48.243564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:26.984 spare 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:26.984 [2024-10-30 10:47:48.343693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:26.984 [2024-10-30 10:47:48.343760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:26.984 [2024-10-30 10:47:48.344195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:19:26.984 [2024-10-30 10:47:48.344443] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:26.984 [2024-10-30 10:47:48.344459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:26.984 [2024-10-30 10:47:48.344727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.984 10:47:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.984 "name": "raid_bdev1", 00:19:26.984 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:26.984 "strip_size_kb": 0, 00:19:26.984 "state": "online", 00:19:26.984 "raid_level": "raid1", 00:19:26.984 "superblock": true, 00:19:26.984 "num_base_bdevs": 4, 00:19:26.984 "num_base_bdevs_discovered": 3, 00:19:26.984 "num_base_bdevs_operational": 3, 00:19:26.984 "base_bdevs_list": [ 00:19:26.984 { 00:19:26.984 "name": "spare", 00:19:26.984 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:26.984 "is_configured": true, 00:19:26.984 "data_offset": 2048, 00:19:26.984 "data_size": 63488 00:19:26.984 }, 00:19:26.984 { 00:19:26.984 "name": null, 00:19:26.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.984 "is_configured": false, 00:19:26.984 "data_offset": 2048, 00:19:26.984 "data_size": 63488 00:19:26.984 }, 00:19:26.984 { 00:19:26.984 "name": "BaseBdev3", 00:19:26.984 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:26.984 "is_configured": true, 00:19:26.984 "data_offset": 2048, 00:19:26.984 "data_size": 63488 00:19:26.984 }, 00:19:26.984 { 00:19:26.984 "name": "BaseBdev4", 00:19:26.984 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:26.984 "is_configured": true, 00:19:26.984 "data_offset": 2048, 00:19:26.984 "data_size": 63488 00:19:26.984 } 00:19:26.984 ] 00:19:26.984 }' 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.984 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.611 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.612 "name": "raid_bdev1", 00:19:27.612 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:27.612 "strip_size_kb": 0, 00:19:27.612 "state": "online", 00:19:27.612 "raid_level": "raid1", 00:19:27.612 "superblock": true, 00:19:27.612 "num_base_bdevs": 4, 00:19:27.612 "num_base_bdevs_discovered": 3, 00:19:27.612 "num_base_bdevs_operational": 3, 00:19:27.612 "base_bdevs_list": [ 00:19:27.612 { 00:19:27.612 "name": "spare", 00:19:27.612 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:27.612 "is_configured": true, 00:19:27.612 "data_offset": 2048, 00:19:27.612 "data_size": 63488 00:19:27.612 }, 00:19:27.612 { 00:19:27.612 "name": null, 00:19:27.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.612 "is_configured": false, 00:19:27.612 "data_offset": 2048, 00:19:27.612 "data_size": 63488 00:19:27.612 }, 00:19:27.612 { 00:19:27.612 "name": "BaseBdev3", 00:19:27.612 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 
00:19:27.612 "is_configured": true, 00:19:27.612 "data_offset": 2048, 00:19:27.612 "data_size": 63488 00:19:27.612 }, 00:19:27.612 { 00:19:27.612 "name": "BaseBdev4", 00:19:27.612 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:27.612 "is_configured": true, 00:19:27.612 "data_offset": 2048, 00:19:27.612 "data_size": 63488 00:19:27.612 } 00:19:27.612 ] 00:19:27.612 }' 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:27.612 10:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.612 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.612 [2024-10-30 10:47:49.077068] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.870 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.870 "name": "raid_bdev1", 00:19:27.870 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:27.870 "strip_size_kb": 0, 00:19:27.870 "state": 
"online", 00:19:27.870 "raid_level": "raid1", 00:19:27.870 "superblock": true, 00:19:27.870 "num_base_bdevs": 4, 00:19:27.870 "num_base_bdevs_discovered": 2, 00:19:27.870 "num_base_bdevs_operational": 2, 00:19:27.870 "base_bdevs_list": [ 00:19:27.870 { 00:19:27.870 "name": null, 00:19:27.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.870 "is_configured": false, 00:19:27.871 "data_offset": 0, 00:19:27.871 "data_size": 63488 00:19:27.871 }, 00:19:27.871 { 00:19:27.871 "name": null, 00:19:27.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.871 "is_configured": false, 00:19:27.871 "data_offset": 2048, 00:19:27.871 "data_size": 63488 00:19:27.871 }, 00:19:27.871 { 00:19:27.871 "name": "BaseBdev3", 00:19:27.871 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:27.871 "is_configured": true, 00:19:27.871 "data_offset": 2048, 00:19:27.871 "data_size": 63488 00:19:27.871 }, 00:19:27.871 { 00:19:27.871 "name": "BaseBdev4", 00:19:27.871 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:27.871 "is_configured": true, 00:19:27.871 "data_offset": 2048, 00:19:27.871 "data_size": 63488 00:19:27.871 } 00:19:27.871 ] 00:19:27.871 }' 00:19:27.871 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.871 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:28.129 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:28.129 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.129 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:28.129 [2024-10-30 10:47:49.593267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.129 [2024-10-30 10:47:49.593508] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:19:28.129 [2024-10-30 10:47:49.593537] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:28.129 [2024-10-30 10:47:49.593589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.386 [2024-10-30 10:47:49.607318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:19:28.386 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.386 10:47:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:28.386 [2024-10-30 10:47:49.609760] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.322 
"name": "raid_bdev1", 00:19:29.322 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:29.322 "strip_size_kb": 0, 00:19:29.322 "state": "online", 00:19:29.322 "raid_level": "raid1", 00:19:29.322 "superblock": true, 00:19:29.322 "num_base_bdevs": 4, 00:19:29.322 "num_base_bdevs_discovered": 3, 00:19:29.322 "num_base_bdevs_operational": 3, 00:19:29.322 "process": { 00:19:29.322 "type": "rebuild", 00:19:29.322 "target": "spare", 00:19:29.322 "progress": { 00:19:29.322 "blocks": 20480, 00:19:29.322 "percent": 32 00:19:29.322 } 00:19:29.322 }, 00:19:29.322 "base_bdevs_list": [ 00:19:29.322 { 00:19:29.322 "name": "spare", 00:19:29.322 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:29.322 "is_configured": true, 00:19:29.322 "data_offset": 2048, 00:19:29.322 "data_size": 63488 00:19:29.322 }, 00:19:29.322 { 00:19:29.322 "name": null, 00:19:29.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.322 "is_configured": false, 00:19:29.322 "data_offset": 2048, 00:19:29.322 "data_size": 63488 00:19:29.322 }, 00:19:29.322 { 00:19:29.322 "name": "BaseBdev3", 00:19:29.322 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:29.322 "is_configured": true, 00:19:29.322 "data_offset": 2048, 00:19:29.322 "data_size": 63488 00:19:29.322 }, 00:19:29.322 { 00:19:29.322 "name": "BaseBdev4", 00:19:29.322 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:29.322 "is_configured": true, 00:19:29.322 "data_offset": 2048, 00:19:29.322 "data_size": 63488 00:19:29.322 } 00:19:29.322 ] 00:19:29.322 }' 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.322 
10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.322 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.322 [2024-10-30 10:47:50.771368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:29.581 [2024-10-30 10:47:50.818511] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:29.581 [2024-10-30 10:47:50.818738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.581 [2024-10-30 10:47:50.818768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:29.581 [2024-10-30 10:47:50.818784] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.581 10:47:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.581 "name": "raid_bdev1", 00:19:29.581 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:29.581 "strip_size_kb": 0, 00:19:29.581 "state": "online", 00:19:29.581 "raid_level": "raid1", 00:19:29.581 "superblock": true, 00:19:29.581 "num_base_bdevs": 4, 00:19:29.581 "num_base_bdevs_discovered": 2, 00:19:29.581 "num_base_bdevs_operational": 2, 00:19:29.581 "base_bdevs_list": [ 00:19:29.581 { 00:19:29.581 "name": null, 00:19:29.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.581 "is_configured": false, 00:19:29.581 "data_offset": 0, 00:19:29.581 "data_size": 63488 00:19:29.581 }, 00:19:29.581 { 00:19:29.581 "name": null, 00:19:29.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.581 "is_configured": false, 00:19:29.581 "data_offset": 2048, 00:19:29.581 "data_size": 63488 00:19:29.581 }, 00:19:29.581 { 00:19:29.581 "name": "BaseBdev3", 00:19:29.581 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:29.581 "is_configured": true, 00:19:29.581 "data_offset": 2048, 00:19:29.581 "data_size": 63488 00:19:29.581 }, 00:19:29.581 { 00:19:29.581 "name": "BaseBdev4", 00:19:29.581 "uuid": 
"a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:29.581 "is_configured": true, 00:19:29.581 "data_offset": 2048, 00:19:29.581 "data_size": 63488 00:19:29.581 } 00:19:29.581 ] 00:19:29.581 }' 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.581 10:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.150 10:47:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:30.150 10:47:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.150 10:47:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.150 [2024-10-30 10:47:51.381937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:30.150 [2024-10-30 10:47:51.382192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.150 [2024-10-30 10:47:51.382271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:30.150 [2024-10-30 10:47:51.382385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.150 [2024-10-30 10:47:51.383041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.150 [2024-10-30 10:47:51.383181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:30.150 [2024-10-30 10:47:51.383416] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:30.150 [2024-10-30 10:47:51.383451] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:30.150 [2024-10-30 10:47:51.383466] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:30.150 [2024-10-30 10:47:51.383499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.150 spare 00:19:30.150 [2024-10-30 10:47:51.397636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:19:30.150 10:47:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.150 10:47:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:30.150 [2024-10-30 10:47:51.400105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.087 "name": "raid_bdev1", 00:19:31.087 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:31.087 "strip_size_kb": 0, 00:19:31.087 
"state": "online", 00:19:31.087 "raid_level": "raid1", 00:19:31.087 "superblock": true, 00:19:31.087 "num_base_bdevs": 4, 00:19:31.087 "num_base_bdevs_discovered": 3, 00:19:31.087 "num_base_bdevs_operational": 3, 00:19:31.087 "process": { 00:19:31.087 "type": "rebuild", 00:19:31.087 "target": "spare", 00:19:31.087 "progress": { 00:19:31.087 "blocks": 20480, 00:19:31.087 "percent": 32 00:19:31.087 } 00:19:31.087 }, 00:19:31.087 "base_bdevs_list": [ 00:19:31.087 { 00:19:31.087 "name": "spare", 00:19:31.087 "uuid": "503b5c5d-d246-5574-858d-a2450b305ad8", 00:19:31.087 "is_configured": true, 00:19:31.087 "data_offset": 2048, 00:19:31.087 "data_size": 63488 00:19:31.087 }, 00:19:31.087 { 00:19:31.087 "name": null, 00:19:31.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.087 "is_configured": false, 00:19:31.087 "data_offset": 2048, 00:19:31.087 "data_size": 63488 00:19:31.087 }, 00:19:31.087 { 00:19:31.087 "name": "BaseBdev3", 00:19:31.087 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:31.087 "is_configured": true, 00:19:31.087 "data_offset": 2048, 00:19:31.087 "data_size": 63488 00:19:31.087 }, 00:19:31.087 { 00:19:31.087 "name": "BaseBdev4", 00:19:31.087 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:31.087 "is_configured": true, 00:19:31.087 "data_offset": 2048, 00:19:31.087 "data_size": 63488 00:19:31.087 } 00:19:31.087 ] 00:19:31.087 }' 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.087 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.346 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.346 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:31.346 10:47:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.346 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.346 [2024-10-30 10:47:52.569697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.346 [2024-10-30 10:47:52.608660] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:31.346 [2024-10-30 10:47:52.608753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.346 [2024-10-30 10:47:52.608783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.346 [2024-10-30 10:47:52.608794] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:31.346 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.346 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:31.346 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.347 10:47:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.347 "name": "raid_bdev1", 00:19:31.347 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:31.347 "strip_size_kb": 0, 00:19:31.347 "state": "online", 00:19:31.347 "raid_level": "raid1", 00:19:31.347 "superblock": true, 00:19:31.347 "num_base_bdevs": 4, 00:19:31.347 "num_base_bdevs_discovered": 2, 00:19:31.347 "num_base_bdevs_operational": 2, 00:19:31.347 "base_bdevs_list": [ 00:19:31.347 { 00:19:31.347 "name": null, 00:19:31.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.347 "is_configured": false, 00:19:31.347 "data_offset": 0, 00:19:31.347 "data_size": 63488 00:19:31.347 }, 00:19:31.347 { 00:19:31.347 "name": null, 00:19:31.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.347 "is_configured": false, 00:19:31.347 "data_offset": 2048, 00:19:31.347 "data_size": 63488 00:19:31.347 }, 00:19:31.347 { 00:19:31.347 "name": "BaseBdev3", 00:19:31.347 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:31.347 "is_configured": true, 00:19:31.347 "data_offset": 2048, 00:19:31.347 "data_size": 63488 00:19:31.347 }, 00:19:31.347 { 00:19:31.347 "name": "BaseBdev4", 00:19:31.347 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:31.347 "is_configured": true, 00:19:31.347 "data_offset": 2048, 00:19:31.347 
"data_size": 63488 00:19:31.347 } 00:19:31.347 ] 00:19:31.347 }' 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.347 10:47:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.914 "name": "raid_bdev1", 00:19:31.914 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:31.914 "strip_size_kb": 0, 00:19:31.914 "state": "online", 00:19:31.914 "raid_level": "raid1", 00:19:31.914 "superblock": true, 00:19:31.914 "num_base_bdevs": 4, 00:19:31.914 "num_base_bdevs_discovered": 2, 00:19:31.914 "num_base_bdevs_operational": 2, 00:19:31.914 "base_bdevs_list": [ 00:19:31.914 { 00:19:31.914 "name": null, 00:19:31.914 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:31.914 "is_configured": false, 00:19:31.914 "data_offset": 0, 00:19:31.914 "data_size": 63488 00:19:31.914 }, 00:19:31.914 { 00:19:31.914 "name": null, 00:19:31.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.914 "is_configured": false, 00:19:31.914 "data_offset": 2048, 00:19:31.914 "data_size": 63488 00:19:31.914 }, 00:19:31.914 { 00:19:31.914 "name": "BaseBdev3", 00:19:31.914 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:31.914 "is_configured": true, 00:19:31.914 "data_offset": 2048, 00:19:31.914 "data_size": 63488 00:19:31.914 }, 00:19:31.914 { 00:19:31.914 "name": "BaseBdev4", 00:19:31.914 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:31.914 "is_configured": true, 00:19:31.914 "data_offset": 2048, 00:19:31.914 "data_size": 63488 00:19:31.914 } 00:19:31.914 ] 00:19:31.914 }' 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:31.914 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.914 10:47:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:31.914 [2024-10-30 10:47:53.316265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:31.914 [2024-10-30 10:47:53.316492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.914 [2024-10-30 10:47:53.316550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:19:31.914 [2024-10-30 10:47:53.316565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.914 [2024-10-30 10:47:53.317202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.914 [2024-10-30 10:47:53.317228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:31.914 [2024-10-30 10:47:53.317345] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:31.914 [2024-10-30 10:47:53.317386] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:31.914 [2024-10-30 10:47:53.317399] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:31.914 [2024-10-30 10:47:53.317411] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:31.915 BaseBdev1 00:19:31.915 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.915 10:47:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.293 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.293 "name": "raid_bdev1", 00:19:33.294 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:33.294 "strip_size_kb": 0, 00:19:33.294 "state": "online", 00:19:33.294 "raid_level": "raid1", 00:19:33.294 "superblock": true, 00:19:33.294 "num_base_bdevs": 4, 00:19:33.294 "num_base_bdevs_discovered": 2, 00:19:33.294 "num_base_bdevs_operational": 2, 00:19:33.294 "base_bdevs_list": [ 00:19:33.294 { 00:19:33.294 "name": null, 00:19:33.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.294 "is_configured": false, 00:19:33.294 
"data_offset": 0, 00:19:33.294 "data_size": 63488 00:19:33.294 }, 00:19:33.294 { 00:19:33.294 "name": null, 00:19:33.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.294 "is_configured": false, 00:19:33.294 "data_offset": 2048, 00:19:33.294 "data_size": 63488 00:19:33.294 }, 00:19:33.294 { 00:19:33.294 "name": "BaseBdev3", 00:19:33.294 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:33.294 "is_configured": true, 00:19:33.294 "data_offset": 2048, 00:19:33.294 "data_size": 63488 00:19:33.294 }, 00:19:33.294 { 00:19:33.294 "name": "BaseBdev4", 00:19:33.294 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:33.294 "is_configured": true, 00:19:33.294 "data_offset": 2048, 00:19:33.294 "data_size": 63488 00:19:33.294 } 00:19:33.294 ] 00:19:33.294 }' 00:19:33.294 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.294 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.553 "name": "raid_bdev1", 00:19:33.553 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:33.553 "strip_size_kb": 0, 00:19:33.553 "state": "online", 00:19:33.553 "raid_level": "raid1", 00:19:33.553 "superblock": true, 00:19:33.553 "num_base_bdevs": 4, 00:19:33.553 "num_base_bdevs_discovered": 2, 00:19:33.553 "num_base_bdevs_operational": 2, 00:19:33.553 "base_bdevs_list": [ 00:19:33.553 { 00:19:33.553 "name": null, 00:19:33.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.553 "is_configured": false, 00:19:33.553 "data_offset": 0, 00:19:33.553 "data_size": 63488 00:19:33.553 }, 00:19:33.553 { 00:19:33.553 "name": null, 00:19:33.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.553 "is_configured": false, 00:19:33.553 "data_offset": 2048, 00:19:33.553 "data_size": 63488 00:19:33.553 }, 00:19:33.553 { 00:19:33.553 "name": "BaseBdev3", 00:19:33.553 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:33.553 "is_configured": true, 00:19:33.553 "data_offset": 2048, 00:19:33.553 "data_size": 63488 00:19:33.553 }, 00:19:33.553 { 00:19:33.553 "name": "BaseBdev4", 00:19:33.553 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:33.553 "is_configured": true, 00:19:33.553 "data_offset": 2048, 00:19:33.553 "data_size": 63488 00:19:33.553 } 00:19:33.553 ] 00:19:33.553 }' 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:33.553 
10:47:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.553 10:47:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:33.553 [2024-10-30 10:47:54.997132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:33.553 [2024-10-30 10:47:54.997496] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:33.553 [2024-10-30 10:47:54.997647] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:33.553 request: 00:19:33.553 { 00:19:33.553 "base_bdev": "BaseBdev1", 00:19:33.553 "raid_bdev": "raid_bdev1", 00:19:33.553 "method": "bdev_raid_add_base_bdev", 00:19:33.553 "req_id": 1 00:19:33.553 } 00:19:33.553 Got JSON-RPC error response 00:19:33.553 response: 00:19:33.553 { 00:19:33.553 "code": -22, 00:19:33.553 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:33.553 } 00:19:33.553 10:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:33.553 10:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:19:33.553 10:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:33.553 10:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:33.553 10:47:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:33.553 10:47:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.931 10:47:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.931 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.931 "name": "raid_bdev1", 00:19:34.931 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:34.931 "strip_size_kb": 0, 00:19:34.931 "state": "online", 00:19:34.931 "raid_level": "raid1", 00:19:34.931 "superblock": true, 00:19:34.931 "num_base_bdevs": 4, 00:19:34.931 "num_base_bdevs_discovered": 2, 00:19:34.931 "num_base_bdevs_operational": 2, 00:19:34.931 "base_bdevs_list": [ 00:19:34.931 { 00:19:34.931 "name": null, 00:19:34.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.932 "is_configured": false, 00:19:34.932 "data_offset": 0, 00:19:34.932 "data_size": 63488 00:19:34.932 }, 00:19:34.932 { 00:19:34.932 "name": null, 00:19:34.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.932 "is_configured": false, 00:19:34.932 "data_offset": 2048, 00:19:34.932 "data_size": 63488 00:19:34.932 }, 00:19:34.932 { 00:19:34.932 "name": "BaseBdev3", 00:19:34.932 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:34.932 "is_configured": true, 00:19:34.932 "data_offset": 2048, 00:19:34.932 "data_size": 63488 00:19:34.932 }, 00:19:34.932 { 00:19:34.932 "name": "BaseBdev4", 00:19:34.932 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:34.932 "is_configured": true, 00:19:34.932 "data_offset": 2048, 00:19:34.932 "data_size": 63488 00:19:34.932 } 00:19:34.932 ] 00:19:34.932 }' 00:19:34.932 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.932 10:47:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.191 "name": "raid_bdev1", 00:19:35.191 "uuid": "c3ad299c-37e9-460b-a040-69f9ffe363ec", 00:19:35.191 "strip_size_kb": 0, 00:19:35.191 "state": "online", 00:19:35.191 "raid_level": "raid1", 00:19:35.191 "superblock": true, 00:19:35.191 "num_base_bdevs": 4, 00:19:35.191 "num_base_bdevs_discovered": 2, 00:19:35.191 "num_base_bdevs_operational": 2, 00:19:35.191 "base_bdevs_list": [ 00:19:35.191 { 00:19:35.191 "name": null, 00:19:35.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.191 "is_configured": false, 00:19:35.191 "data_offset": 0, 00:19:35.191 "data_size": 63488 00:19:35.191 }, 00:19:35.191 { 00:19:35.191 "name": null, 00:19:35.191 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:35.191 "is_configured": false, 00:19:35.191 "data_offset": 2048, 00:19:35.191 "data_size": 63488 00:19:35.191 }, 00:19:35.191 { 00:19:35.191 "name": "BaseBdev3", 00:19:35.191 "uuid": "020045f0-8281-53c3-a347-f77960bc2866", 00:19:35.191 "is_configured": true, 00:19:35.191 "data_offset": 2048, 00:19:35.191 "data_size": 63488 00:19:35.191 }, 00:19:35.191 { 00:19:35.191 "name": "BaseBdev4", 00:19:35.191 "uuid": "a48c67c2-63b1-552e-a8b7-352bb6a9aee8", 00:19:35.191 "is_configured": true, 00:19:35.191 "data_offset": 2048, 00:19:35.191 "data_size": 63488 00:19:35.191 } 00:19:35.191 ] 00:19:35.191 }' 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:35.191 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79641 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79641 ']' 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79641 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79641 00:19:35.451 killing process with pid 79641 00:19:35.451 Received shutdown signal, test time was about 19.303670 seconds 00:19:35.451 00:19:35.451 Latency(us) 00:19:35.451 [2024-10-30T10:47:56.921Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:19:35.451 [2024-10-30T10:47:56.921Z] =================================================================================================================== 00:19:35.451 [2024-10-30T10:47:56.921Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79641' 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79641 00:19:35.451 [2024-10-30 10:47:56.732555] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:35.451 10:47:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79641 00:19:35.451 [2024-10-30 10:47:56.732739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.451 [2024-10-30 10:47:56.732830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:35.451 [2024-10-30 10:47:56.732854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:35.710 [2024-10-30 10:47:57.097796] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.105 10:47:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:37.105 00:19:37.105 real 0m22.924s 00:19:37.105 user 0m31.287s 00:19:37.105 sys 0m2.388s 00:19:37.105 10:47:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:37.105 ************************************ 00:19:37.105 END TEST raid_rebuild_test_sb_io 00:19:37.105 ************************************ 00:19:37.105 10:47:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.105 10:47:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:37.105 10:47:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:19:37.105 10:47:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:37.105 10:47:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:37.105 10:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.105 ************************************ 00:19:37.105 START TEST raid5f_state_function_test 00:19:37.105 ************************************ 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.105 10:47:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:37.105 Process raid pid: 80370 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80370 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80370' 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80370 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80370 ']' 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.105 10:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.105 [2024-10-30 10:47:58.346027] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:19:37.105 [2024-10-30 10:47:58.346394] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.105 [2024-10-30 10:47:58.521507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.364 [2024-10-30 10:47:58.648693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.623 [2024-10-30 10:47:58.852280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.623 [2024-10-30 10:47:58.852508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.883 [2024-10-30 10:47:59.343138] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:37.883 [2024-10-30 10:47:59.343356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:37.883 [2024-10-30 10:47:59.343385] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:37.883 [2024-10-30 10:47:59.343404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:37.883 [2024-10-30 10:47:59.343415] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:19:37.883 [2024-10-30 10:47:59.343429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.883 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.142 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.142 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.142 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.142 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.142 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:38.142 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.142 "name": "Existed_Raid", 00:19:38.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.142 "strip_size_kb": 64, 00:19:38.142 "state": "configuring", 00:19:38.142 "raid_level": "raid5f", 00:19:38.142 "superblock": false, 00:19:38.142 "num_base_bdevs": 3, 00:19:38.142 "num_base_bdevs_discovered": 0, 00:19:38.142 "num_base_bdevs_operational": 3, 00:19:38.142 "base_bdevs_list": [ 00:19:38.142 { 00:19:38.142 "name": "BaseBdev1", 00:19:38.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.142 "is_configured": false, 00:19:38.142 "data_offset": 0, 00:19:38.142 "data_size": 0 00:19:38.142 }, 00:19:38.142 { 00:19:38.142 "name": "BaseBdev2", 00:19:38.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.142 "is_configured": false, 00:19:38.142 "data_offset": 0, 00:19:38.142 "data_size": 0 00:19:38.142 }, 00:19:38.142 { 00:19:38.142 "name": "BaseBdev3", 00:19:38.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.142 "is_configured": false, 00:19:38.142 "data_offset": 0, 00:19:38.142 "data_size": 0 00:19:38.142 } 00:19:38.142 ] 00:19:38.142 }' 00:19:38.142 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.142 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.401 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:38.401 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.401 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.401 [2024-10-30 10:47:59.859310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:38.401 [2024-10-30 10:47:59.859484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:19:38.401 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.401 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:38.401 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.401 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.401 [2024-10-30 10:47:59.867268] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.401 [2024-10-30 10:47:59.867442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.401 [2024-10-30 10:47:59.867560] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.401 [2024-10-30 10:47:59.867620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.401 [2024-10-30 10:47:59.867721] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:38.401 [2024-10-30 10:47:59.867847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.660 [2024-10-30 10:47:59.913163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.660 BaseBdev1 00:19:38.660 10:47:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.660 [ 00:19:38.660 { 00:19:38.660 "name": "BaseBdev1", 00:19:38.660 "aliases": [ 00:19:38.660 "7791592e-9f35-4f0c-b01e-cb4562833604" 00:19:38.660 ], 00:19:38.660 "product_name": "Malloc disk", 00:19:38.660 "block_size": 512, 00:19:38.660 "num_blocks": 65536, 00:19:38.660 "uuid": "7791592e-9f35-4f0c-b01e-cb4562833604", 00:19:38.660 "assigned_rate_limits": { 00:19:38.660 "rw_ios_per_sec": 0, 00:19:38.660 
"rw_mbytes_per_sec": 0, 00:19:38.660 "r_mbytes_per_sec": 0, 00:19:38.660 "w_mbytes_per_sec": 0 00:19:38.660 }, 00:19:38.660 "claimed": true, 00:19:38.660 "claim_type": "exclusive_write", 00:19:38.660 "zoned": false, 00:19:38.660 "supported_io_types": { 00:19:38.660 "read": true, 00:19:38.660 "write": true, 00:19:38.660 "unmap": true, 00:19:38.660 "flush": true, 00:19:38.660 "reset": true, 00:19:38.660 "nvme_admin": false, 00:19:38.660 "nvme_io": false, 00:19:38.660 "nvme_io_md": false, 00:19:38.660 "write_zeroes": true, 00:19:38.660 "zcopy": true, 00:19:38.660 "get_zone_info": false, 00:19:38.660 "zone_management": false, 00:19:38.660 "zone_append": false, 00:19:38.660 "compare": false, 00:19:38.660 "compare_and_write": false, 00:19:38.660 "abort": true, 00:19:38.660 "seek_hole": false, 00:19:38.660 "seek_data": false, 00:19:38.660 "copy": true, 00:19:38.660 "nvme_iov_md": false 00:19:38.660 }, 00:19:38.660 "memory_domains": [ 00:19:38.660 { 00:19:38.660 "dma_device_id": "system", 00:19:38.660 "dma_device_type": 1 00:19:38.660 }, 00:19:38.660 { 00:19:38.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.660 "dma_device_type": 2 00:19:38.660 } 00:19:38.660 ], 00:19:38.660 "driver_specific": {} 00:19:38.660 } 00:19:38.660 ] 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.660 10:47:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.660 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.661 10:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.661 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.661 "name": "Existed_Raid", 00:19:38.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.661 "strip_size_kb": 64, 00:19:38.661 "state": "configuring", 00:19:38.661 "raid_level": "raid5f", 00:19:38.661 "superblock": false, 00:19:38.661 "num_base_bdevs": 3, 00:19:38.661 "num_base_bdevs_discovered": 1, 00:19:38.661 "num_base_bdevs_operational": 3, 00:19:38.661 "base_bdevs_list": [ 00:19:38.661 { 00:19:38.661 "name": "BaseBdev1", 00:19:38.661 "uuid": "7791592e-9f35-4f0c-b01e-cb4562833604", 00:19:38.661 "is_configured": true, 00:19:38.661 "data_offset": 0, 00:19:38.661 "data_size": 65536 00:19:38.661 }, 00:19:38.661 { 00:19:38.661 "name": 
"BaseBdev2", 00:19:38.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.661 "is_configured": false, 00:19:38.661 "data_offset": 0, 00:19:38.661 "data_size": 0 00:19:38.661 }, 00:19:38.661 { 00:19:38.661 "name": "BaseBdev3", 00:19:38.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.661 "is_configured": false, 00:19:38.661 "data_offset": 0, 00:19:38.661 "data_size": 0 00:19:38.661 } 00:19:38.661 ] 00:19:38.661 }' 00:19:38.661 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.661 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.228 [2024-10-30 10:48:00.485346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.228 [2024-10-30 10:48:00.485423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.228 [2024-10-30 10:48:00.493419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:39.228 [2024-10-30 10:48:00.496102] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:19:39.228 [2024-10-30 10:48:00.496278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.228 [2024-10-30 10:48:00.496399] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:39.228 [2024-10-30 10:48:00.496459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.228 "name": "Existed_Raid", 00:19:39.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.228 "strip_size_kb": 64, 00:19:39.228 "state": "configuring", 00:19:39.228 "raid_level": "raid5f", 00:19:39.228 "superblock": false, 00:19:39.228 "num_base_bdevs": 3, 00:19:39.228 "num_base_bdevs_discovered": 1, 00:19:39.228 "num_base_bdevs_operational": 3, 00:19:39.228 "base_bdevs_list": [ 00:19:39.228 { 00:19:39.228 "name": "BaseBdev1", 00:19:39.228 "uuid": "7791592e-9f35-4f0c-b01e-cb4562833604", 00:19:39.228 "is_configured": true, 00:19:39.228 "data_offset": 0, 00:19:39.228 "data_size": 65536 00:19:39.228 }, 00:19:39.228 { 00:19:39.228 "name": "BaseBdev2", 00:19:39.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.228 "is_configured": false, 00:19:39.228 "data_offset": 0, 00:19:39.228 "data_size": 0 00:19:39.228 }, 00:19:39.228 { 00:19:39.228 "name": "BaseBdev3", 00:19:39.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.228 "is_configured": false, 00:19:39.228 "data_offset": 0, 00:19:39.228 "data_size": 0 00:19:39.228 } 00:19:39.228 ] 00:19:39.228 }' 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.228 10:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.796 [2024-10-30 10:48:01.048418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:39.796 BaseBdev2 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.796 10:48:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:39.796 [ 00:19:39.796 { 00:19:39.796 "name": "BaseBdev2", 00:19:39.796 "aliases": [ 00:19:39.796 "4fc46660-f553-4594-ac39-813506e3cc9c" 00:19:39.796 ], 00:19:39.796 "product_name": "Malloc disk", 00:19:39.796 "block_size": 512, 00:19:39.796 "num_blocks": 65536, 00:19:39.796 "uuid": "4fc46660-f553-4594-ac39-813506e3cc9c", 00:19:39.796 "assigned_rate_limits": { 00:19:39.796 "rw_ios_per_sec": 0, 00:19:39.796 "rw_mbytes_per_sec": 0, 00:19:39.796 "r_mbytes_per_sec": 0, 00:19:39.796 "w_mbytes_per_sec": 0 00:19:39.796 }, 00:19:39.796 "claimed": true, 00:19:39.796 "claim_type": "exclusive_write", 00:19:39.797 "zoned": false, 00:19:39.797 "supported_io_types": { 00:19:39.797 "read": true, 00:19:39.797 "write": true, 00:19:39.797 "unmap": true, 00:19:39.797 "flush": true, 00:19:39.797 "reset": true, 00:19:39.797 "nvme_admin": false, 00:19:39.797 "nvme_io": false, 00:19:39.797 "nvme_io_md": false, 00:19:39.797 "write_zeroes": true, 00:19:39.797 "zcopy": true, 00:19:39.797 "get_zone_info": false, 00:19:39.797 "zone_management": false, 00:19:39.797 "zone_append": false, 00:19:39.797 "compare": false, 00:19:39.797 "compare_and_write": false, 00:19:39.797 "abort": true, 00:19:39.797 "seek_hole": false, 00:19:39.797 "seek_data": false, 00:19:39.797 "copy": true, 00:19:39.797 "nvme_iov_md": false 00:19:39.797 }, 00:19:39.797 "memory_domains": [ 00:19:39.797 { 00:19:39.797 "dma_device_id": "system", 00:19:39.797 "dma_device_type": 1 00:19:39.797 }, 00:19:39.797 { 00:19:39.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.797 "dma_device_type": 2 00:19:39.797 } 00:19:39.797 ], 00:19:39.797 "driver_specific": {} 00:19:39.797 } 00:19:39.797 ] 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:19:39.797 "name": "Existed_Raid", 00:19:39.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.797 "strip_size_kb": 64, 00:19:39.797 "state": "configuring", 00:19:39.797 "raid_level": "raid5f", 00:19:39.797 "superblock": false, 00:19:39.797 "num_base_bdevs": 3, 00:19:39.797 "num_base_bdevs_discovered": 2, 00:19:39.797 "num_base_bdevs_operational": 3, 00:19:39.797 "base_bdevs_list": [ 00:19:39.797 { 00:19:39.797 "name": "BaseBdev1", 00:19:39.797 "uuid": "7791592e-9f35-4f0c-b01e-cb4562833604", 00:19:39.797 "is_configured": true, 00:19:39.797 "data_offset": 0, 00:19:39.797 "data_size": 65536 00:19:39.797 }, 00:19:39.797 { 00:19:39.797 "name": "BaseBdev2", 00:19:39.797 "uuid": "4fc46660-f553-4594-ac39-813506e3cc9c", 00:19:39.797 "is_configured": true, 00:19:39.797 "data_offset": 0, 00:19:39.797 "data_size": 65536 00:19:39.797 }, 00:19:39.797 { 00:19:39.797 "name": "BaseBdev3", 00:19:39.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.797 "is_configured": false, 00:19:39.797 "data_offset": 0, 00:19:39.797 "data_size": 0 00:19:39.797 } 00:19:39.797 ] 00:19:39.797 }' 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.797 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.410 [2024-10-30 10:48:01.668789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:40.410 [2024-10-30 10:48:01.668881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:40.410 [2024-10-30 10:48:01.668907] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:40.410 [2024-10-30 10:48:01.669296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:40.410 BaseBdev3 00:19:40.410 [2024-10-30 10:48:01.674652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:40.410 [2024-10-30 10:48:01.674678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:40.410 [2024-10-30 10:48:01.675066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.410 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.410 [ 00:19:40.410 { 00:19:40.410 "name": "BaseBdev3", 00:19:40.410 "aliases": [ 00:19:40.410 "87f1936f-d928-4287-8dad-aa7f0eccbc81" 00:19:40.410 ], 00:19:40.410 "product_name": "Malloc disk", 00:19:40.410 "block_size": 512, 00:19:40.410 "num_blocks": 65536, 00:19:40.410 "uuid": "87f1936f-d928-4287-8dad-aa7f0eccbc81", 00:19:40.410 "assigned_rate_limits": { 00:19:40.410 "rw_ios_per_sec": 0, 00:19:40.410 "rw_mbytes_per_sec": 0, 00:19:40.410 "r_mbytes_per_sec": 0, 00:19:40.410 "w_mbytes_per_sec": 0 00:19:40.410 }, 00:19:40.410 "claimed": true, 00:19:40.410 "claim_type": "exclusive_write", 00:19:40.410 "zoned": false, 00:19:40.410 "supported_io_types": { 00:19:40.410 "read": true, 00:19:40.410 "write": true, 00:19:40.410 "unmap": true, 00:19:40.410 "flush": true, 00:19:40.410 "reset": true, 00:19:40.410 "nvme_admin": false, 00:19:40.410 "nvme_io": false, 00:19:40.410 "nvme_io_md": false, 00:19:40.410 "write_zeroes": true, 00:19:40.410 "zcopy": true, 00:19:40.410 "get_zone_info": false, 00:19:40.410 "zone_management": false, 00:19:40.410 "zone_append": false, 00:19:40.411 "compare": false, 00:19:40.411 "compare_and_write": false, 00:19:40.411 "abort": true, 00:19:40.411 "seek_hole": false, 00:19:40.411 "seek_data": false, 00:19:40.411 "copy": true, 00:19:40.411 "nvme_iov_md": false 00:19:40.411 }, 00:19:40.411 "memory_domains": [ 00:19:40.411 { 00:19:40.411 "dma_device_id": "system", 00:19:40.411 "dma_device_type": 1 00:19:40.411 }, 00:19:40.411 { 00:19:40.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.411 "dma_device_type": 2 00:19:40.411 } 00:19:40.411 ], 00:19:40.411 "driver_specific": {} 00:19:40.411 } 00:19:40.411 ] 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.411 10:48:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.411 "name": "Existed_Raid", 00:19:40.411 "uuid": "11254c31-6167-4c62-bca9-98c936c9c4cd", 00:19:40.411 "strip_size_kb": 64, 00:19:40.411 "state": "online", 00:19:40.411 "raid_level": "raid5f", 00:19:40.411 "superblock": false, 00:19:40.411 "num_base_bdevs": 3, 00:19:40.411 "num_base_bdevs_discovered": 3, 00:19:40.411 "num_base_bdevs_operational": 3, 00:19:40.411 "base_bdevs_list": [ 00:19:40.411 { 00:19:40.411 "name": "BaseBdev1", 00:19:40.411 "uuid": "7791592e-9f35-4f0c-b01e-cb4562833604", 00:19:40.411 "is_configured": true, 00:19:40.411 "data_offset": 0, 00:19:40.411 "data_size": 65536 00:19:40.411 }, 00:19:40.411 { 00:19:40.411 "name": "BaseBdev2", 00:19:40.411 "uuid": "4fc46660-f553-4594-ac39-813506e3cc9c", 00:19:40.411 "is_configured": true, 00:19:40.411 "data_offset": 0, 00:19:40.411 "data_size": 65536 00:19:40.411 }, 00:19:40.411 { 00:19:40.411 "name": "BaseBdev3", 00:19:40.411 "uuid": "87f1936f-d928-4287-8dad-aa7f0eccbc81", 00:19:40.411 "is_configured": true, 00:19:40.411 "data_offset": 0, 00:19:40.411 "data_size": 65536 00:19:40.411 } 00:19:40.411 ] 00:19:40.411 }' 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.411 10:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:40.978 10:48:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:40.978 [2024-10-30 10:48:02.281224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:40.978 "name": "Existed_Raid", 00:19:40.978 "aliases": [ 00:19:40.978 "11254c31-6167-4c62-bca9-98c936c9c4cd" 00:19:40.978 ], 00:19:40.978 "product_name": "Raid Volume", 00:19:40.978 "block_size": 512, 00:19:40.978 "num_blocks": 131072, 00:19:40.978 "uuid": "11254c31-6167-4c62-bca9-98c936c9c4cd", 00:19:40.978 "assigned_rate_limits": { 00:19:40.978 "rw_ios_per_sec": 0, 00:19:40.978 "rw_mbytes_per_sec": 0, 00:19:40.978 "r_mbytes_per_sec": 0, 00:19:40.978 "w_mbytes_per_sec": 0 00:19:40.978 }, 00:19:40.978 "claimed": false, 00:19:40.978 "zoned": false, 00:19:40.978 "supported_io_types": { 00:19:40.978 "read": true, 00:19:40.978 "write": true, 00:19:40.978 "unmap": false, 00:19:40.978 "flush": false, 00:19:40.978 "reset": true, 00:19:40.978 "nvme_admin": false, 00:19:40.978 "nvme_io": false, 00:19:40.978 "nvme_io_md": false, 00:19:40.978 "write_zeroes": true, 00:19:40.978 "zcopy": false, 00:19:40.978 "get_zone_info": false, 00:19:40.978 "zone_management": false, 00:19:40.978 "zone_append": false, 
00:19:40.978 "compare": false, 00:19:40.978 "compare_and_write": false, 00:19:40.978 "abort": false, 00:19:40.978 "seek_hole": false, 00:19:40.978 "seek_data": false, 00:19:40.978 "copy": false, 00:19:40.978 "nvme_iov_md": false 00:19:40.978 }, 00:19:40.978 "driver_specific": { 00:19:40.978 "raid": { 00:19:40.978 "uuid": "11254c31-6167-4c62-bca9-98c936c9c4cd", 00:19:40.978 "strip_size_kb": 64, 00:19:40.978 "state": "online", 00:19:40.978 "raid_level": "raid5f", 00:19:40.978 "superblock": false, 00:19:40.978 "num_base_bdevs": 3, 00:19:40.978 "num_base_bdevs_discovered": 3, 00:19:40.978 "num_base_bdevs_operational": 3, 00:19:40.978 "base_bdevs_list": [ 00:19:40.978 { 00:19:40.978 "name": "BaseBdev1", 00:19:40.978 "uuid": "7791592e-9f35-4f0c-b01e-cb4562833604", 00:19:40.978 "is_configured": true, 00:19:40.978 "data_offset": 0, 00:19:40.978 "data_size": 65536 00:19:40.978 }, 00:19:40.978 { 00:19:40.978 "name": "BaseBdev2", 00:19:40.978 "uuid": "4fc46660-f553-4594-ac39-813506e3cc9c", 00:19:40.978 "is_configured": true, 00:19:40.978 "data_offset": 0, 00:19:40.978 "data_size": 65536 00:19:40.978 }, 00:19:40.978 { 00:19:40.978 "name": "BaseBdev3", 00:19:40.978 "uuid": "87f1936f-d928-4287-8dad-aa7f0eccbc81", 00:19:40.978 "is_configured": true, 00:19:40.978 "data_offset": 0, 00:19:40.978 "data_size": 65536 00:19:40.978 } 00:19:40.978 ] 00:19:40.978 } 00:19:40.978 } 00:19:40.978 }' 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:40.978 BaseBdev2 00:19:40.978 BaseBdev3' 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.978 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.236 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.236 [2024-10-30 10:48:02.621143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:41.494 
10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.494 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.495 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.495 "name": "Existed_Raid", 00:19:41.495 "uuid": "11254c31-6167-4c62-bca9-98c936c9c4cd", 00:19:41.495 "strip_size_kb": 64, 00:19:41.495 "state": 
"online", 00:19:41.495 "raid_level": "raid5f", 00:19:41.495 "superblock": false, 00:19:41.495 "num_base_bdevs": 3, 00:19:41.495 "num_base_bdevs_discovered": 2, 00:19:41.495 "num_base_bdevs_operational": 2, 00:19:41.495 "base_bdevs_list": [ 00:19:41.495 { 00:19:41.495 "name": null, 00:19:41.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.495 "is_configured": false, 00:19:41.495 "data_offset": 0, 00:19:41.495 "data_size": 65536 00:19:41.495 }, 00:19:41.495 { 00:19:41.495 "name": "BaseBdev2", 00:19:41.495 "uuid": "4fc46660-f553-4594-ac39-813506e3cc9c", 00:19:41.495 "is_configured": true, 00:19:41.495 "data_offset": 0, 00:19:41.495 "data_size": 65536 00:19:41.495 }, 00:19:41.495 { 00:19:41.495 "name": "BaseBdev3", 00:19:41.495 "uuid": "87f1936f-d928-4287-8dad-aa7f0eccbc81", 00:19:41.495 "is_configured": true, 00:19:41.495 "data_offset": 0, 00:19:41.495 "data_size": 65536 00:19:41.495 } 00:19:41.495 ] 00:19:41.495 }' 00:19:41.495 10:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.495 10:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.061 [2024-10-30 10:48:03.282969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:42.061 [2024-10-30 10:48:03.283267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:42.061 [2024-10-30 10:48:03.369091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.061 [2024-10-30 10:48:03.421185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:42.061 [2024-10-30 10:48:03.421378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:42.061 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.319 BaseBdev2 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:42.319 [ 00:19:42.319 { 00:19:42.319 "name": "BaseBdev2", 00:19:42.319 "aliases": [ 00:19:42.319 "25d32c8d-da89-4d04-b8b5-0447659d4037" 00:19:42.319 ], 00:19:42.319 "product_name": "Malloc disk", 00:19:42.319 "block_size": 512, 00:19:42.319 "num_blocks": 65536, 00:19:42.319 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:42.319 "assigned_rate_limits": { 00:19:42.319 "rw_ios_per_sec": 0, 00:19:42.319 "rw_mbytes_per_sec": 0, 00:19:42.319 "r_mbytes_per_sec": 0, 00:19:42.319 "w_mbytes_per_sec": 0 00:19:42.319 }, 00:19:42.319 "claimed": false, 00:19:42.319 "zoned": false, 00:19:42.319 "supported_io_types": { 00:19:42.319 "read": true, 00:19:42.319 "write": true, 00:19:42.319 "unmap": true, 00:19:42.319 "flush": true, 00:19:42.319 "reset": true, 00:19:42.319 "nvme_admin": false, 00:19:42.319 "nvme_io": false, 00:19:42.319 "nvme_io_md": false, 00:19:42.319 "write_zeroes": true, 00:19:42.319 "zcopy": true, 00:19:42.319 "get_zone_info": false, 00:19:42.319 "zone_management": false, 00:19:42.319 "zone_append": false, 00:19:42.319 "compare": false, 00:19:42.319 "compare_and_write": false, 00:19:42.319 "abort": true, 00:19:42.319 "seek_hole": false, 00:19:42.319 "seek_data": false, 00:19:42.319 "copy": true, 00:19:42.319 "nvme_iov_md": false 00:19:42.319 }, 00:19:42.319 "memory_domains": [ 00:19:42.319 { 00:19:42.319 "dma_device_id": "system", 00:19:42.319 "dma_device_type": 1 00:19:42.319 }, 00:19:42.319 { 00:19:42.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.319 "dma_device_type": 2 00:19:42.319 } 00:19:42.319 ], 00:19:42.319 "driver_specific": {} 00:19:42.319 } 00:19:42.319 ] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.319 BaseBdev3 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.319 [ 00:19:42.319 { 00:19:42.319 "name": "BaseBdev3", 00:19:42.319 "aliases": [ 00:19:42.319 "be5ffea5-ab64-4da4-b919-665e20fa4508" 00:19:42.319 ], 00:19:42.319 "product_name": "Malloc disk", 00:19:42.319 "block_size": 512, 00:19:42.319 "num_blocks": 65536, 00:19:42.319 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:42.319 "assigned_rate_limits": { 00:19:42.319 "rw_ios_per_sec": 0, 00:19:42.319 "rw_mbytes_per_sec": 0, 00:19:42.319 "r_mbytes_per_sec": 0, 00:19:42.319 "w_mbytes_per_sec": 0 00:19:42.319 }, 00:19:42.319 "claimed": false, 00:19:42.319 "zoned": false, 00:19:42.319 "supported_io_types": { 00:19:42.319 "read": true, 00:19:42.319 "write": true, 00:19:42.319 "unmap": true, 00:19:42.319 "flush": true, 00:19:42.319 "reset": true, 00:19:42.319 "nvme_admin": false, 00:19:42.319 "nvme_io": false, 00:19:42.319 "nvme_io_md": false, 00:19:42.319 "write_zeroes": true, 00:19:42.319 "zcopy": true, 00:19:42.319 "get_zone_info": false, 00:19:42.319 "zone_management": false, 00:19:42.319 "zone_append": false, 00:19:42.319 "compare": false, 00:19:42.319 "compare_and_write": false, 00:19:42.319 "abort": true, 00:19:42.319 "seek_hole": false, 00:19:42.319 "seek_data": false, 00:19:42.319 "copy": true, 00:19:42.319 "nvme_iov_md": false 00:19:42.319 }, 00:19:42.319 "memory_domains": [ 00:19:42.319 { 00:19:42.319 "dma_device_id": "system", 00:19:42.319 "dma_device_type": 1 00:19:42.319 }, 00:19:42.319 { 00:19:42.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.319 "dma_device_type": 2 00:19:42.319 } 00:19:42.319 ], 00:19:42.319 "driver_specific": {} 00:19:42.319 } 00:19:42.319 ] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:42.319 10:48:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.319 [2024-10-30 10:48:03.716404] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:42.319 [2024-10-30 10:48:03.716615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:42.319 [2024-10-30 10:48:03.716771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.319 [2024-10-30 10:48:03.719303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.319 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.320 10:48:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.320 "name": "Existed_Raid", 00:19:42.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.320 "strip_size_kb": 64, 00:19:42.320 "state": "configuring", 00:19:42.320 "raid_level": "raid5f", 00:19:42.320 "superblock": false, 00:19:42.320 "num_base_bdevs": 3, 00:19:42.320 "num_base_bdevs_discovered": 2, 00:19:42.320 "num_base_bdevs_operational": 3, 00:19:42.320 "base_bdevs_list": [ 00:19:42.320 { 00:19:42.320 "name": "BaseBdev1", 00:19:42.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.320 "is_configured": false, 00:19:42.320 "data_offset": 0, 00:19:42.320 "data_size": 0 00:19:42.320 }, 00:19:42.320 { 00:19:42.320 "name": "BaseBdev2", 00:19:42.320 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:42.320 "is_configured": true, 00:19:42.320 "data_offset": 0, 00:19:42.320 "data_size": 65536 00:19:42.320 }, 00:19:42.320 { 00:19:42.320 "name": "BaseBdev3", 00:19:42.320 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:42.320 "is_configured": true, 
00:19:42.320 "data_offset": 0, 00:19:42.320 "data_size": 65536 00:19:42.320 } 00:19:42.320 ] 00:19:42.320 }' 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.320 10:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.884 [2024-10-30 10:48:04.240536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.884 10:48:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.884 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.884 "name": "Existed_Raid", 00:19:42.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.884 "strip_size_kb": 64, 00:19:42.884 "state": "configuring", 00:19:42.884 "raid_level": "raid5f", 00:19:42.884 "superblock": false, 00:19:42.884 "num_base_bdevs": 3, 00:19:42.884 "num_base_bdevs_discovered": 1, 00:19:42.884 "num_base_bdevs_operational": 3, 00:19:42.884 "base_bdevs_list": [ 00:19:42.884 { 00:19:42.884 "name": "BaseBdev1", 00:19:42.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.884 "is_configured": false, 00:19:42.884 "data_offset": 0, 00:19:42.885 "data_size": 0 00:19:42.885 }, 00:19:42.885 { 00:19:42.885 "name": null, 00:19:42.885 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:42.885 "is_configured": false, 00:19:42.885 "data_offset": 0, 00:19:42.885 "data_size": 65536 00:19:42.885 }, 00:19:42.885 { 00:19:42.885 "name": "BaseBdev3", 00:19:42.885 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:42.885 "is_configured": true, 00:19:42.885 "data_offset": 0, 00:19:42.885 "data_size": 65536 00:19:42.885 } 00:19:42.885 ] 00:19:42.885 }' 00:19:42.885 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.885 10:48:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.451 [2024-10-30 10:48:04.842787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.451 BaseBdev1 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:43.451 10:48:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.451 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.451 [ 00:19:43.451 { 00:19:43.451 "name": "BaseBdev1", 00:19:43.451 "aliases": [ 00:19:43.451 "f1c78461-6809-4973-b158-020f0e97a346" 00:19:43.451 ], 00:19:43.451 "product_name": "Malloc disk", 00:19:43.451 "block_size": 512, 00:19:43.451 "num_blocks": 65536, 00:19:43.451 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:43.451 "assigned_rate_limits": { 00:19:43.451 "rw_ios_per_sec": 0, 00:19:43.451 "rw_mbytes_per_sec": 0, 00:19:43.451 "r_mbytes_per_sec": 0, 00:19:43.451 "w_mbytes_per_sec": 0 00:19:43.451 }, 00:19:43.451 "claimed": true, 00:19:43.451 "claim_type": "exclusive_write", 00:19:43.451 "zoned": false, 00:19:43.451 "supported_io_types": { 00:19:43.451 "read": true, 00:19:43.451 "write": true, 00:19:43.451 "unmap": true, 00:19:43.451 "flush": true, 00:19:43.451 "reset": true, 00:19:43.452 "nvme_admin": false, 00:19:43.452 "nvme_io": false, 00:19:43.452 "nvme_io_md": false, 00:19:43.452 "write_zeroes": true, 00:19:43.452 "zcopy": true, 00:19:43.452 "get_zone_info": false, 00:19:43.452 "zone_management": false, 00:19:43.452 "zone_append": false, 00:19:43.452 
"compare": false, 00:19:43.452 "compare_and_write": false, 00:19:43.452 "abort": true, 00:19:43.452 "seek_hole": false, 00:19:43.452 "seek_data": false, 00:19:43.452 "copy": true, 00:19:43.452 "nvme_iov_md": false 00:19:43.452 }, 00:19:43.452 "memory_domains": [ 00:19:43.452 { 00:19:43.452 "dma_device_id": "system", 00:19:43.452 "dma_device_type": 1 00:19:43.452 }, 00:19:43.452 { 00:19:43.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.452 "dma_device_type": 2 00:19:43.452 } 00:19:43.452 ], 00:19:43.452 "driver_specific": {} 00:19:43.452 } 00:19:43.452 ] 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.452 10:48:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.452 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.710 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.710 "name": "Existed_Raid", 00:19:43.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.710 "strip_size_kb": 64, 00:19:43.710 "state": "configuring", 00:19:43.710 "raid_level": "raid5f", 00:19:43.710 "superblock": false, 00:19:43.710 "num_base_bdevs": 3, 00:19:43.710 "num_base_bdevs_discovered": 2, 00:19:43.710 "num_base_bdevs_operational": 3, 00:19:43.710 "base_bdevs_list": [ 00:19:43.710 { 00:19:43.710 "name": "BaseBdev1", 00:19:43.710 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:43.710 "is_configured": true, 00:19:43.710 "data_offset": 0, 00:19:43.710 "data_size": 65536 00:19:43.710 }, 00:19:43.710 { 00:19:43.710 "name": null, 00:19:43.710 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:43.710 "is_configured": false, 00:19:43.710 "data_offset": 0, 00:19:43.710 "data_size": 65536 00:19:43.710 }, 00:19:43.710 { 00:19:43.710 "name": "BaseBdev3", 00:19:43.710 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:43.710 "is_configured": true, 00:19:43.710 "data_offset": 0, 00:19:43.710 "data_size": 65536 00:19:43.710 } 00:19:43.710 ] 00:19:43.710 }' 00:19:43.710 10:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.710 10:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.969 10:48:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.969 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.969 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.969 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:43.969 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.969 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:43.969 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:43.969 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.969 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.969 [2024-10-30 10:48:05.435025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:44.229 10:48:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.229 "name": "Existed_Raid", 00:19:44.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.229 "strip_size_kb": 64, 00:19:44.229 "state": "configuring", 00:19:44.229 "raid_level": "raid5f", 00:19:44.229 "superblock": false, 00:19:44.229 "num_base_bdevs": 3, 00:19:44.229 "num_base_bdevs_discovered": 1, 00:19:44.229 "num_base_bdevs_operational": 3, 00:19:44.229 "base_bdevs_list": [ 00:19:44.229 { 00:19:44.229 "name": "BaseBdev1", 00:19:44.229 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:44.229 "is_configured": true, 00:19:44.229 "data_offset": 0, 00:19:44.229 "data_size": 65536 00:19:44.229 }, 00:19:44.229 { 00:19:44.229 "name": null, 00:19:44.229 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:44.229 "is_configured": false, 00:19:44.229 "data_offset": 0, 00:19:44.229 "data_size": 65536 00:19:44.229 }, 00:19:44.229 { 00:19:44.229 "name": null, 
00:19:44.229 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:44.229 "is_configured": false, 00:19:44.229 "data_offset": 0, 00:19:44.229 "data_size": 65536 00:19:44.229 } 00:19:44.229 ] 00:19:44.229 }' 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.229 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.797 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.797 10:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:44.797 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.797 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.797 10:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.797 [2024-10-30 10:48:06.011238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:44.797 10:48:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.797 "name": "Existed_Raid", 00:19:44.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.797 "strip_size_kb": 64, 00:19:44.797 "state": "configuring", 00:19:44.797 "raid_level": "raid5f", 00:19:44.797 "superblock": false, 00:19:44.797 "num_base_bdevs": 3, 00:19:44.797 "num_base_bdevs_discovered": 2, 00:19:44.797 "num_base_bdevs_operational": 3, 00:19:44.797 "base_bdevs_list": [ 00:19:44.797 { 
00:19:44.797 "name": "BaseBdev1", 00:19:44.797 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:44.797 "is_configured": true, 00:19:44.797 "data_offset": 0, 00:19:44.797 "data_size": 65536 00:19:44.797 }, 00:19:44.797 { 00:19:44.797 "name": null, 00:19:44.797 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:44.797 "is_configured": false, 00:19:44.797 "data_offset": 0, 00:19:44.797 "data_size": 65536 00:19:44.797 }, 00:19:44.797 { 00:19:44.797 "name": "BaseBdev3", 00:19:44.797 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:44.797 "is_configured": true, 00:19:44.797 "data_offset": 0, 00:19:44.797 "data_size": 65536 00:19:44.797 } 00:19:44.797 ] 00:19:44.797 }' 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.797 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.055 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.055 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.055 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:45.056 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.314 [2024-10-30 10:48:06.575677] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.314 10:48:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.314 "name": "Existed_Raid", 00:19:45.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.314 "strip_size_kb": 64, 00:19:45.314 "state": "configuring", 00:19:45.314 "raid_level": "raid5f", 00:19:45.314 "superblock": false, 00:19:45.314 "num_base_bdevs": 3, 00:19:45.315 "num_base_bdevs_discovered": 1, 00:19:45.315 "num_base_bdevs_operational": 3, 00:19:45.315 "base_bdevs_list": [ 00:19:45.315 { 00:19:45.315 "name": null, 00:19:45.315 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:45.315 "is_configured": false, 00:19:45.315 "data_offset": 0, 00:19:45.315 "data_size": 65536 00:19:45.315 }, 00:19:45.315 { 00:19:45.315 "name": null, 00:19:45.315 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:45.315 "is_configured": false, 00:19:45.315 "data_offset": 0, 00:19:45.315 "data_size": 65536 00:19:45.315 }, 00:19:45.315 { 00:19:45.315 "name": "BaseBdev3", 00:19:45.315 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:45.315 "is_configured": true, 00:19:45.315 "data_offset": 0, 00:19:45.315 "data_size": 65536 00:19:45.315 } 00:19:45.315 ] 00:19:45.315 }' 00:19:45.315 10:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.315 10:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.882 [2024-10-30 10:48:07.225872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.882 10:48:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.882 "name": "Existed_Raid", 00:19:45.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.882 "strip_size_kb": 64, 00:19:45.882 "state": "configuring", 00:19:45.882 "raid_level": "raid5f", 00:19:45.882 "superblock": false, 00:19:45.882 "num_base_bdevs": 3, 00:19:45.882 "num_base_bdevs_discovered": 2, 00:19:45.882 "num_base_bdevs_operational": 3, 00:19:45.882 "base_bdevs_list": [ 00:19:45.882 { 00:19:45.882 "name": null, 00:19:45.882 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:45.882 "is_configured": false, 00:19:45.882 "data_offset": 0, 00:19:45.882 "data_size": 65536 00:19:45.882 }, 00:19:45.882 { 00:19:45.882 "name": "BaseBdev2", 00:19:45.882 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:45.882 "is_configured": true, 00:19:45.882 "data_offset": 0, 00:19:45.882 "data_size": 65536 00:19:45.882 }, 00:19:45.882 { 00:19:45.882 "name": "BaseBdev3", 00:19:45.882 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:45.882 "is_configured": true, 00:19:45.882 "data_offset": 0, 00:19:45.882 "data_size": 65536 00:19:45.882 } 00:19:45.882 ] 00:19:45.882 }' 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.882 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.490 10:48:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f1c78461-6809-4973-b158-020f0e97a346 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.490 [2024-10-30 10:48:07.892957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:46.490 [2024-10-30 10:48:07.893078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:46.490 [2024-10-30 10:48:07.893095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:46.490 [2024-10-30 10:48:07.893433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:19:46.490 [2024-10-30 10:48:07.898426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:46.490 NewBaseBdev 00:19:46.490 [2024-10-30 10:48:07.898640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:46.490 [2024-10-30 10:48:07.899027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.490 10:48:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.490 [ 00:19:46.490 { 00:19:46.490 "name": "NewBaseBdev", 00:19:46.490 "aliases": [ 00:19:46.490 "f1c78461-6809-4973-b158-020f0e97a346" 00:19:46.490 ], 00:19:46.490 "product_name": "Malloc disk", 00:19:46.490 "block_size": 512, 00:19:46.490 "num_blocks": 65536, 00:19:46.490 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:46.490 "assigned_rate_limits": { 00:19:46.490 "rw_ios_per_sec": 0, 00:19:46.490 "rw_mbytes_per_sec": 0, 00:19:46.490 "r_mbytes_per_sec": 0, 00:19:46.490 "w_mbytes_per_sec": 0 00:19:46.490 }, 00:19:46.490 "claimed": true, 00:19:46.490 "claim_type": "exclusive_write", 00:19:46.490 "zoned": false, 00:19:46.490 "supported_io_types": { 00:19:46.490 "read": true, 00:19:46.490 "write": true, 00:19:46.490 "unmap": true, 00:19:46.490 "flush": true, 00:19:46.490 "reset": true, 00:19:46.490 "nvme_admin": false, 00:19:46.490 "nvme_io": false, 00:19:46.490 "nvme_io_md": false, 00:19:46.490 "write_zeroes": true, 00:19:46.490 "zcopy": true, 00:19:46.490 "get_zone_info": false, 00:19:46.490 "zone_management": false, 00:19:46.490 "zone_append": false, 00:19:46.490 "compare": false, 00:19:46.490 "compare_and_write": false, 00:19:46.490 "abort": true, 00:19:46.490 "seek_hole": false, 00:19:46.490 "seek_data": false, 00:19:46.490 "copy": true, 00:19:46.490 "nvme_iov_md": false 00:19:46.490 }, 00:19:46.490 "memory_domains": [ 00:19:46.490 { 00:19:46.490 "dma_device_id": "system", 00:19:46.490 "dma_device_type": 1 00:19:46.490 }, 00:19:46.490 { 00:19:46.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.490 "dma_device_type": 2 00:19:46.490 } 00:19:46.490 ], 00:19:46.490 "driver_specific": {} 00:19:46.490 } 00:19:46.490 ] 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:46.490 10:48:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.490 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.757 10:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.757 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.757 "name": "Existed_Raid", 00:19:46.757 "uuid": "bb72b20b-b654-42ad-958a-1c9af9098995", 00:19:46.757 "strip_size_kb": 64, 00:19:46.757 "state": "online", 
00:19:46.757 "raid_level": "raid5f", 00:19:46.757 "superblock": false, 00:19:46.757 "num_base_bdevs": 3, 00:19:46.757 "num_base_bdevs_discovered": 3, 00:19:46.757 "num_base_bdevs_operational": 3, 00:19:46.757 "base_bdevs_list": [ 00:19:46.757 { 00:19:46.757 "name": "NewBaseBdev", 00:19:46.757 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:46.757 "is_configured": true, 00:19:46.757 "data_offset": 0, 00:19:46.757 "data_size": 65536 00:19:46.757 }, 00:19:46.757 { 00:19:46.757 "name": "BaseBdev2", 00:19:46.757 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:46.757 "is_configured": true, 00:19:46.757 "data_offset": 0, 00:19:46.757 "data_size": 65536 00:19:46.757 }, 00:19:46.757 { 00:19:46.757 "name": "BaseBdev3", 00:19:46.757 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:46.757 "is_configured": true, 00:19:46.757 "data_offset": 0, 00:19:46.757 "data_size": 65536 00:19:46.757 } 00:19:46.757 ] 00:19:46.757 }' 00:19:46.757 10:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.757 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.016 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:47.016 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:47.016 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:47.016 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:47.016 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:47.016 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:47.275 10:48:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:47.275 [2024-10-30 10:48:08.493021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:47.275 "name": "Existed_Raid", 00:19:47.275 "aliases": [ 00:19:47.275 "bb72b20b-b654-42ad-958a-1c9af9098995" 00:19:47.275 ], 00:19:47.275 "product_name": "Raid Volume", 00:19:47.275 "block_size": 512, 00:19:47.275 "num_blocks": 131072, 00:19:47.275 "uuid": "bb72b20b-b654-42ad-958a-1c9af9098995", 00:19:47.275 "assigned_rate_limits": { 00:19:47.275 "rw_ios_per_sec": 0, 00:19:47.275 "rw_mbytes_per_sec": 0, 00:19:47.275 "r_mbytes_per_sec": 0, 00:19:47.275 "w_mbytes_per_sec": 0 00:19:47.275 }, 00:19:47.275 "claimed": false, 00:19:47.275 "zoned": false, 00:19:47.275 "supported_io_types": { 00:19:47.275 "read": true, 00:19:47.275 "write": true, 00:19:47.275 "unmap": false, 00:19:47.275 "flush": false, 00:19:47.275 "reset": true, 00:19:47.275 "nvme_admin": false, 00:19:47.275 "nvme_io": false, 00:19:47.275 "nvme_io_md": false, 00:19:47.275 "write_zeroes": true, 00:19:47.275 "zcopy": false, 00:19:47.275 "get_zone_info": false, 00:19:47.275 "zone_management": false, 00:19:47.275 "zone_append": false, 00:19:47.275 "compare": false, 00:19:47.275 "compare_and_write": false, 00:19:47.275 "abort": false, 00:19:47.275 "seek_hole": false, 00:19:47.275 "seek_data": false, 00:19:47.275 "copy": false, 00:19:47.275 "nvme_iov_md": false 00:19:47.275 }, 00:19:47.275 "driver_specific": { 00:19:47.275 "raid": { 00:19:47.275 "uuid": 
"bb72b20b-b654-42ad-958a-1c9af9098995", 00:19:47.275 "strip_size_kb": 64, 00:19:47.275 "state": "online", 00:19:47.275 "raid_level": "raid5f", 00:19:47.275 "superblock": false, 00:19:47.275 "num_base_bdevs": 3, 00:19:47.275 "num_base_bdevs_discovered": 3, 00:19:47.275 "num_base_bdevs_operational": 3, 00:19:47.275 "base_bdevs_list": [ 00:19:47.275 { 00:19:47.275 "name": "NewBaseBdev", 00:19:47.275 "uuid": "f1c78461-6809-4973-b158-020f0e97a346", 00:19:47.275 "is_configured": true, 00:19:47.275 "data_offset": 0, 00:19:47.275 "data_size": 65536 00:19:47.275 }, 00:19:47.275 { 00:19:47.275 "name": "BaseBdev2", 00:19:47.275 "uuid": "25d32c8d-da89-4d04-b8b5-0447659d4037", 00:19:47.275 "is_configured": true, 00:19:47.275 "data_offset": 0, 00:19:47.275 "data_size": 65536 00:19:47.275 }, 00:19:47.275 { 00:19:47.275 "name": "BaseBdev3", 00:19:47.275 "uuid": "be5ffea5-ab64-4da4-b919-665e20fa4508", 00:19:47.275 "is_configured": true, 00:19:47.275 "data_offset": 0, 00:19:47.275 "data_size": 65536 00:19:47.275 } 00:19:47.275 ] 00:19:47.275 } 00:19:47.275 } 00:19:47.275 }' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:47.275 BaseBdev2 00:19:47.275 BaseBdev3' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.275 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:47.534 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.535 10:48:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.535 [2024-10-30 10:48:08.800850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:47.535 [2024-10-30 10:48:08.801052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.535 [2024-10-30 10:48:08.801278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.535 [2024-10-30 10:48:08.801738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.535 [2024-10-30 10:48:08.801880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80370 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80370 ']' 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 80370 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80370 00:19:47.535 killing process with pid 80370 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80370' 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80370 00:19:47.535 [2024-10-30 10:48:08.840332] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:47.535 10:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80370 00:19:47.793 [2024-10-30 10:48:09.101906] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:48.735 00:19:48.735 real 0m11.898s 00:19:48.735 user 0m19.815s 00:19:48.735 sys 0m1.633s 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.735 ************************************ 00:19:48.735 END TEST raid5f_state_function_test 00:19:48.735 ************************************ 00:19:48.735 10:48:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:19:48.735 10:48:10 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:48.735 10:48:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:48.735 10:48:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:48.735 ************************************ 00:19:48.735 START TEST raid5f_state_function_test_sb 00:19:48.735 ************************************ 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:48.735 10:48:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:48.735 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81004 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81004' 00:19:48.995 Process raid pid: 81004 00:19:48.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81004 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 81004 ']' 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.995 10:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.995 [2024-10-30 10:48:10.322253] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:19:48.995 [2024-10-30 10:48:10.322431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.254 [2024-10-30 10:48:10.514425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.254 [2024-10-30 10:48:10.645311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.514 [2024-10-30 10:48:10.854635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:49.514 [2024-10-30 10:48:10.854921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.083 [2024-10-30 10:48:11.307719] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:50.083 [2024-10-30 10:48:11.307823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:50.083 [2024-10-30 10:48:11.307874] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.083 [2024-10-30 10:48:11.308082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:50.083 [2024-10-30 10:48:11.308229] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:19:50.083 [2024-10-30 10:48:11.308292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.083 10:48:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.083 "name": "Existed_Raid", 00:19:50.083 "uuid": "3b8b8148-7f0e-4c4c-bf7e-82d98ad8e1bd", 00:19:50.083 "strip_size_kb": 64, 00:19:50.083 "state": "configuring", 00:19:50.083 "raid_level": "raid5f", 00:19:50.083 "superblock": true, 00:19:50.083 "num_base_bdevs": 3, 00:19:50.083 "num_base_bdevs_discovered": 0, 00:19:50.083 "num_base_bdevs_operational": 3, 00:19:50.083 "base_bdevs_list": [ 00:19:50.083 { 00:19:50.083 "name": "BaseBdev1", 00:19:50.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.083 "is_configured": false, 00:19:50.083 "data_offset": 0, 00:19:50.083 "data_size": 0 00:19:50.083 }, 00:19:50.083 { 00:19:50.083 "name": "BaseBdev2", 00:19:50.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.083 "is_configured": false, 00:19:50.083 "data_offset": 0, 00:19:50.083 "data_size": 0 00:19:50.083 }, 00:19:50.083 { 00:19:50.083 "name": "BaseBdev3", 00:19:50.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.083 "is_configured": false, 00:19:50.083 "data_offset": 0, 00:19:50.083 "data_size": 0 00:19:50.083 } 00:19:50.083 ] 00:19:50.083 }' 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.083 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.652 [2024-10-30 10:48:11.875845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:50.652 
[2024-10-30 10:48:11.876058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.652 [2024-10-30 10:48:11.883809] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:50.652 [2024-10-30 10:48:11.884027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:50.652 [2024-10-30 10:48:11.884151] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:50.652 [2024-10-30 10:48:11.884286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:50.652 [2024-10-30 10:48:11.884394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:50.652 [2024-10-30 10:48:11.884434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.652 [2024-10-30 10:48:11.928512] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:50.652 BaseBdev1 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.652 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.652 [ 00:19:50.652 { 00:19:50.652 "name": "BaseBdev1", 00:19:50.652 "aliases": [ 00:19:50.652 "719ea3fd-289c-43d5-9f0d-ebf0b4460654" 00:19:50.652 ], 00:19:50.652 "product_name": "Malloc disk", 00:19:50.652 "block_size": 512, 00:19:50.652 
"num_blocks": 65536, 00:19:50.652 "uuid": "719ea3fd-289c-43d5-9f0d-ebf0b4460654", 00:19:50.652 "assigned_rate_limits": { 00:19:50.652 "rw_ios_per_sec": 0, 00:19:50.652 "rw_mbytes_per_sec": 0, 00:19:50.652 "r_mbytes_per_sec": 0, 00:19:50.652 "w_mbytes_per_sec": 0 00:19:50.652 }, 00:19:50.652 "claimed": true, 00:19:50.652 "claim_type": "exclusive_write", 00:19:50.652 "zoned": false, 00:19:50.652 "supported_io_types": { 00:19:50.652 "read": true, 00:19:50.652 "write": true, 00:19:50.652 "unmap": true, 00:19:50.652 "flush": true, 00:19:50.652 "reset": true, 00:19:50.652 "nvme_admin": false, 00:19:50.652 "nvme_io": false, 00:19:50.652 "nvme_io_md": false, 00:19:50.652 "write_zeroes": true, 00:19:50.652 "zcopy": true, 00:19:50.652 "get_zone_info": false, 00:19:50.652 "zone_management": false, 00:19:50.652 "zone_append": false, 00:19:50.652 "compare": false, 00:19:50.652 "compare_and_write": false, 00:19:50.652 "abort": true, 00:19:50.652 "seek_hole": false, 00:19:50.652 "seek_data": false, 00:19:50.652 "copy": true, 00:19:50.652 "nvme_iov_md": false 00:19:50.652 }, 00:19:50.652 "memory_domains": [ 00:19:50.652 { 00:19:50.652 "dma_device_id": "system", 00:19:50.652 "dma_device_type": 1 00:19:50.652 }, 00:19:50.652 { 00:19:50.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.652 "dma_device_type": 2 00:19:50.652 } 00:19:50.653 ], 00:19:50.653 "driver_specific": {} 00:19:50.653 } 00:19:50.653 ] 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.653 10:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.653 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.653 "name": "Existed_Raid", 00:19:50.653 "uuid": "47d890af-270e-4c4c-8440-c6d273a7e388", 00:19:50.653 "strip_size_kb": 64, 00:19:50.653 "state": "configuring", 00:19:50.653 "raid_level": "raid5f", 00:19:50.653 "superblock": true, 00:19:50.653 "num_base_bdevs": 3, 00:19:50.653 "num_base_bdevs_discovered": 1, 00:19:50.653 "num_base_bdevs_operational": 3, 00:19:50.653 "base_bdevs_list": [ 00:19:50.653 { 00:19:50.653 
"name": "BaseBdev1", 00:19:50.653 "uuid": "719ea3fd-289c-43d5-9f0d-ebf0b4460654", 00:19:50.653 "is_configured": true, 00:19:50.653 "data_offset": 2048, 00:19:50.653 "data_size": 63488 00:19:50.653 }, 00:19:50.653 { 00:19:50.653 "name": "BaseBdev2", 00:19:50.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.653 "is_configured": false, 00:19:50.653 "data_offset": 0, 00:19:50.653 "data_size": 0 00:19:50.653 }, 00:19:50.653 { 00:19:50.653 "name": "BaseBdev3", 00:19:50.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.653 "is_configured": false, 00:19:50.653 "data_offset": 0, 00:19:50.653 "data_size": 0 00:19:50.653 } 00:19:50.653 ] 00:19:50.653 }' 00:19:50.653 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.653 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.221 [2024-10-30 10:48:12.480784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.221 [2024-10-30 10:48:12.480872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:51.221 [2024-10-30 10:48:12.488819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.221 [2024-10-30 10:48:12.491626] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.221 [2024-10-30 10:48:12.491696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.221 [2024-10-30 10:48:12.491713] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:51.221 [2024-10-30 10:48:12.491727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.221 "name": "Existed_Raid", 00:19:51.221 "uuid": "cf22840b-4be2-45ae-a0b3-075040157253", 00:19:51.221 "strip_size_kb": 64, 00:19:51.221 "state": "configuring", 00:19:51.221 "raid_level": "raid5f", 00:19:51.221 "superblock": true, 00:19:51.221 "num_base_bdevs": 3, 00:19:51.221 "num_base_bdevs_discovered": 1, 00:19:51.221 "num_base_bdevs_operational": 3, 00:19:51.221 "base_bdevs_list": [ 00:19:51.221 { 00:19:51.221 "name": "BaseBdev1", 00:19:51.221 "uuid": "719ea3fd-289c-43d5-9f0d-ebf0b4460654", 00:19:51.221 "is_configured": true, 00:19:51.221 "data_offset": 2048, 00:19:51.221 "data_size": 63488 00:19:51.221 }, 00:19:51.221 { 00:19:51.221 "name": "BaseBdev2", 00:19:51.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.221 "is_configured": false, 00:19:51.221 "data_offset": 0, 00:19:51.221 "data_size": 0 00:19:51.221 }, 00:19:51.221 { 00:19:51.221 "name": "BaseBdev3", 00:19:51.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.221 "is_configured": false, 00:19:51.221 "data_offset": 0, 00:19:51.221 "data_size": 
0 00:19:51.221 } 00:19:51.221 ] 00:19:51.221 }' 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.221 10:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.801 [2024-10-30 10:48:13.046955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.801 BaseBdev2 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.801 [ 00:19:51.801 { 00:19:51.801 "name": "BaseBdev2", 00:19:51.801 "aliases": [ 00:19:51.801 "b674d976-df92-45ce-9094-0755ad692144" 00:19:51.801 ], 00:19:51.801 "product_name": "Malloc disk", 00:19:51.801 "block_size": 512, 00:19:51.801 "num_blocks": 65536, 00:19:51.801 "uuid": "b674d976-df92-45ce-9094-0755ad692144", 00:19:51.801 "assigned_rate_limits": { 00:19:51.801 "rw_ios_per_sec": 0, 00:19:51.801 "rw_mbytes_per_sec": 0, 00:19:51.801 "r_mbytes_per_sec": 0, 00:19:51.801 "w_mbytes_per_sec": 0 00:19:51.801 }, 00:19:51.801 "claimed": true, 00:19:51.801 "claim_type": "exclusive_write", 00:19:51.801 "zoned": false, 00:19:51.801 "supported_io_types": { 00:19:51.801 "read": true, 00:19:51.801 "write": true, 00:19:51.801 "unmap": true, 00:19:51.801 "flush": true, 00:19:51.801 "reset": true, 00:19:51.801 "nvme_admin": false, 00:19:51.801 "nvme_io": false, 00:19:51.801 "nvme_io_md": false, 00:19:51.801 "write_zeroes": true, 00:19:51.801 "zcopy": true, 00:19:51.801 "get_zone_info": false, 00:19:51.801 "zone_management": false, 00:19:51.801 "zone_append": false, 00:19:51.801 "compare": false, 00:19:51.801 "compare_and_write": false, 00:19:51.801 "abort": true, 00:19:51.801 "seek_hole": false, 00:19:51.801 "seek_data": false, 00:19:51.801 "copy": true, 00:19:51.801 "nvme_iov_md": false 00:19:51.801 }, 00:19:51.801 "memory_domains": [ 00:19:51.801 { 00:19:51.801 "dma_device_id": "system", 00:19:51.801 "dma_device_type": 1 00:19:51.801 }, 00:19:51.801 { 00:19:51.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.801 "dma_device_type": 2 00:19:51.801 } 
00:19:51.801 ], 00:19:51.801 "driver_specific": {} 00:19:51.801 } 00:19:51.801 ] 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.801 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.802 10:48:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.802 "name": "Existed_Raid", 00:19:51.802 "uuid": "cf22840b-4be2-45ae-a0b3-075040157253", 00:19:51.802 "strip_size_kb": 64, 00:19:51.802 "state": "configuring", 00:19:51.802 "raid_level": "raid5f", 00:19:51.802 "superblock": true, 00:19:51.802 "num_base_bdevs": 3, 00:19:51.802 "num_base_bdevs_discovered": 2, 00:19:51.802 "num_base_bdevs_operational": 3, 00:19:51.802 "base_bdevs_list": [ 00:19:51.802 { 00:19:51.802 "name": "BaseBdev1", 00:19:51.802 "uuid": "719ea3fd-289c-43d5-9f0d-ebf0b4460654", 00:19:51.802 "is_configured": true, 00:19:51.802 "data_offset": 2048, 00:19:51.802 "data_size": 63488 00:19:51.802 }, 00:19:51.802 { 00:19:51.802 "name": "BaseBdev2", 00:19:51.802 "uuid": "b674d976-df92-45ce-9094-0755ad692144", 00:19:51.802 "is_configured": true, 00:19:51.802 "data_offset": 2048, 00:19:51.802 "data_size": 63488 00:19:51.802 }, 00:19:51.802 { 00:19:51.802 "name": "BaseBdev3", 00:19:51.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.802 "is_configured": false, 00:19:51.802 "data_offset": 0, 00:19:51.802 "data_size": 0 00:19:51.802 } 00:19:51.802 ] 00:19:51.802 }' 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.802 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.370 [2024-10-30 10:48:13.672792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:52.370 BaseBdev3 00:19:52.370 [2024-10-30 10:48:13.673456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:52.370 [2024-10-30 10:48:13.673495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:52.370 [2024-10-30 10:48:13.673838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.370 [2024-10-30 10:48:13.679178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:52.370 [2024-10-30 10:48:13.679337] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:52.370 [2024-10-30 10:48:13.679844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.370 [ 00:19:52.370 { 00:19:52.370 "name": "BaseBdev3", 00:19:52.370 "aliases": [ 00:19:52.370 "7e8b42a6-584d-416d-b148-aa984c238e21" 00:19:52.370 ], 00:19:52.370 "product_name": "Malloc disk", 00:19:52.370 "block_size": 512, 00:19:52.370 "num_blocks": 65536, 00:19:52.370 "uuid": "7e8b42a6-584d-416d-b148-aa984c238e21", 00:19:52.370 "assigned_rate_limits": { 00:19:52.370 "rw_ios_per_sec": 0, 00:19:52.370 "rw_mbytes_per_sec": 0, 00:19:52.370 "r_mbytes_per_sec": 0, 00:19:52.370 "w_mbytes_per_sec": 0 00:19:52.370 }, 00:19:52.370 "claimed": true, 00:19:52.370 "claim_type": "exclusive_write", 00:19:52.370 "zoned": false, 00:19:52.370 "supported_io_types": { 00:19:52.370 "read": true, 00:19:52.370 "write": true, 00:19:52.370 "unmap": true, 00:19:52.370 "flush": true, 00:19:52.370 "reset": true, 00:19:52.370 "nvme_admin": false, 00:19:52.370 "nvme_io": false, 00:19:52.370 "nvme_io_md": false, 00:19:52.370 "write_zeroes": true, 00:19:52.370 "zcopy": true, 00:19:52.370 "get_zone_info": false, 00:19:52.370 "zone_management": false, 00:19:52.370 "zone_append": false, 00:19:52.370 "compare": false, 00:19:52.370 "compare_and_write": false, 00:19:52.370 "abort": true, 00:19:52.370 "seek_hole": false, 00:19:52.370 "seek_data": false, 00:19:52.370 "copy": true, 00:19:52.370 
"nvme_iov_md": false 00:19:52.370 }, 00:19:52.370 "memory_domains": [ 00:19:52.370 { 00:19:52.370 "dma_device_id": "system", 00:19:52.370 "dma_device_type": 1 00:19:52.370 }, 00:19:52.370 { 00:19:52.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:52.370 "dma_device_type": 2 00:19:52.370 } 00:19:52.370 ], 00:19:52.370 "driver_specific": {} 00:19:52.370 } 00:19:52.370 ] 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.370 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.370 "name": "Existed_Raid", 00:19:52.370 "uuid": "cf22840b-4be2-45ae-a0b3-075040157253", 00:19:52.370 "strip_size_kb": 64, 00:19:52.370 "state": "online", 00:19:52.370 "raid_level": "raid5f", 00:19:52.370 "superblock": true, 00:19:52.370 "num_base_bdevs": 3, 00:19:52.370 "num_base_bdevs_discovered": 3, 00:19:52.370 "num_base_bdevs_operational": 3, 00:19:52.370 "base_bdevs_list": [ 00:19:52.370 { 00:19:52.370 "name": "BaseBdev1", 00:19:52.371 "uuid": "719ea3fd-289c-43d5-9f0d-ebf0b4460654", 00:19:52.371 "is_configured": true, 00:19:52.371 "data_offset": 2048, 00:19:52.371 "data_size": 63488 00:19:52.371 }, 00:19:52.371 { 00:19:52.371 "name": "BaseBdev2", 00:19:52.371 "uuid": "b674d976-df92-45ce-9094-0755ad692144", 00:19:52.371 "is_configured": true, 00:19:52.371 "data_offset": 2048, 00:19:52.371 "data_size": 63488 00:19:52.371 }, 00:19:52.371 { 00:19:52.371 "name": "BaseBdev3", 00:19:52.371 "uuid": "7e8b42a6-584d-416d-b148-aa984c238e21", 00:19:52.371 "is_configured": true, 00:19:52.371 "data_offset": 2048, 00:19:52.371 "data_size": 63488 00:19:52.371 } 00:19:52.371 ] 00:19:52.371 }' 00:19:52.371 10:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.371 10:48:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:52.939 [2024-10-30 10:48:14.217964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:52.939 "name": "Existed_Raid", 00:19:52.939 "aliases": [ 00:19:52.939 "cf22840b-4be2-45ae-a0b3-075040157253" 00:19:52.939 ], 00:19:52.939 "product_name": "Raid Volume", 00:19:52.939 "block_size": 512, 00:19:52.939 "num_blocks": 126976, 00:19:52.939 "uuid": "cf22840b-4be2-45ae-a0b3-075040157253", 00:19:52.939 "assigned_rate_limits": { 00:19:52.939 "rw_ios_per_sec": 0, 00:19:52.939 
"rw_mbytes_per_sec": 0, 00:19:52.939 "r_mbytes_per_sec": 0, 00:19:52.939 "w_mbytes_per_sec": 0 00:19:52.939 }, 00:19:52.939 "claimed": false, 00:19:52.939 "zoned": false, 00:19:52.939 "supported_io_types": { 00:19:52.939 "read": true, 00:19:52.939 "write": true, 00:19:52.939 "unmap": false, 00:19:52.939 "flush": false, 00:19:52.939 "reset": true, 00:19:52.939 "nvme_admin": false, 00:19:52.939 "nvme_io": false, 00:19:52.939 "nvme_io_md": false, 00:19:52.939 "write_zeroes": true, 00:19:52.939 "zcopy": false, 00:19:52.939 "get_zone_info": false, 00:19:52.939 "zone_management": false, 00:19:52.939 "zone_append": false, 00:19:52.939 "compare": false, 00:19:52.939 "compare_and_write": false, 00:19:52.939 "abort": false, 00:19:52.939 "seek_hole": false, 00:19:52.939 "seek_data": false, 00:19:52.939 "copy": false, 00:19:52.939 "nvme_iov_md": false 00:19:52.939 }, 00:19:52.939 "driver_specific": { 00:19:52.939 "raid": { 00:19:52.939 "uuid": "cf22840b-4be2-45ae-a0b3-075040157253", 00:19:52.939 "strip_size_kb": 64, 00:19:52.939 "state": "online", 00:19:52.939 "raid_level": "raid5f", 00:19:52.939 "superblock": true, 00:19:52.939 "num_base_bdevs": 3, 00:19:52.939 "num_base_bdevs_discovered": 3, 00:19:52.939 "num_base_bdevs_operational": 3, 00:19:52.939 "base_bdevs_list": [ 00:19:52.939 { 00:19:52.939 "name": "BaseBdev1", 00:19:52.939 "uuid": "719ea3fd-289c-43d5-9f0d-ebf0b4460654", 00:19:52.939 "is_configured": true, 00:19:52.939 "data_offset": 2048, 00:19:52.939 "data_size": 63488 00:19:52.939 }, 00:19:52.939 { 00:19:52.939 "name": "BaseBdev2", 00:19:52.939 "uuid": "b674d976-df92-45ce-9094-0755ad692144", 00:19:52.939 "is_configured": true, 00:19:52.939 "data_offset": 2048, 00:19:52.939 "data_size": 63488 00:19:52.939 }, 00:19:52.939 { 00:19:52.939 "name": "BaseBdev3", 00:19:52.939 "uuid": "7e8b42a6-584d-416d-b148-aa984c238e21", 00:19:52.939 "is_configured": true, 00:19:52.939 "data_offset": 2048, 00:19:52.939 "data_size": 63488 00:19:52.939 } 00:19:52.939 ] 00:19:52.939 } 
00:19:52.939 } 00:19:52.939 }' 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:52.939 BaseBdev2 00:19:52.939 BaseBdev3' 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:52.939 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.199 [2024-10-30 10:48:14.509832] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.199 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.199 "name": "Existed_Raid", 00:19:53.199 "uuid": "cf22840b-4be2-45ae-a0b3-075040157253", 00:19:53.199 "strip_size_kb": 64, 00:19:53.199 "state": "online", 00:19:53.199 "raid_level": "raid5f", 00:19:53.199 "superblock": true, 00:19:53.199 "num_base_bdevs": 3, 00:19:53.199 "num_base_bdevs_discovered": 2, 00:19:53.199 "num_base_bdevs_operational": 2, 00:19:53.199 "base_bdevs_list": [ 00:19:53.199 { 00:19:53.199 "name": null, 00:19:53.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.199 "is_configured": false, 00:19:53.199 "data_offset": 0, 00:19:53.199 "data_size": 63488 00:19:53.199 }, 00:19:53.199 { 00:19:53.199 "name": "BaseBdev2", 00:19:53.199 "uuid": "b674d976-df92-45ce-9094-0755ad692144", 00:19:53.199 "is_configured": true, 00:19:53.200 "data_offset": 2048, 00:19:53.200 "data_size": 63488 00:19:53.200 }, 00:19:53.200 { 00:19:53.200 "name": "BaseBdev3", 00:19:53.200 "uuid": "7e8b42a6-584d-416d-b148-aa984c238e21", 00:19:53.200 "is_configured": true, 00:19:53.200 "data_offset": 2048, 00:19:53.200 "data_size": 63488 00:19:53.200 } 00:19:53.200 ] 00:19:53.200 }' 00:19:53.200 10:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.200 10:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.767 10:48:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.767 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.767 [2024-10-30 10:48:15.188426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:53.767 [2024-10-30 10:48:15.188613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.027 [2024-10-30 10:48:15.274711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.027 [2024-10-30 10:48:15.334765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:54.027 [2024-10-30 10:48:15.334829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.027 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.286 BaseBdev2 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # 
[[ -z '' ]] 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.286 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 [ 00:19:54.287 { 00:19:54.287 "name": "BaseBdev2", 00:19:54.287 "aliases": [ 00:19:54.287 "6c776d98-cea3-4897-a82d-0120424eb200" 00:19:54.287 ], 00:19:54.287 "product_name": "Malloc disk", 00:19:54.287 "block_size": 512, 00:19:54.287 "num_blocks": 65536, 00:19:54.287 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:54.287 "assigned_rate_limits": { 00:19:54.287 "rw_ios_per_sec": 0, 00:19:54.287 "rw_mbytes_per_sec": 0, 00:19:54.287 "r_mbytes_per_sec": 0, 00:19:54.287 "w_mbytes_per_sec": 0 00:19:54.287 }, 00:19:54.287 "claimed": false, 00:19:54.287 "zoned": false, 00:19:54.287 "supported_io_types": { 00:19:54.287 "read": true, 00:19:54.287 "write": true, 00:19:54.287 "unmap": true, 00:19:54.287 "flush": true, 00:19:54.287 "reset": true, 00:19:54.287 "nvme_admin": false, 00:19:54.287 "nvme_io": false, 00:19:54.287 "nvme_io_md": false, 00:19:54.287 "write_zeroes": true, 00:19:54.287 "zcopy": true, 00:19:54.287 "get_zone_info": false, 00:19:54.287 "zone_management": false, 00:19:54.287 "zone_append": false, 
00:19:54.287 "compare": false, 00:19:54.287 "compare_and_write": false, 00:19:54.287 "abort": true, 00:19:54.287 "seek_hole": false, 00:19:54.287 "seek_data": false, 00:19:54.287 "copy": true, 00:19:54.287 "nvme_iov_md": false 00:19:54.287 }, 00:19:54.287 "memory_domains": [ 00:19:54.287 { 00:19:54.287 "dma_device_id": "system", 00:19:54.287 "dma_device_type": 1 00:19:54.287 }, 00:19:54.287 { 00:19:54.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.287 "dma_device_type": 2 00:19:54.287 } 00:19:54.287 ], 00:19:54.287 "driver_specific": {} 00:19:54.287 } 00:19:54.287 ] 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 BaseBdev3 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:54.287 
10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 [ 00:19:54.287 { 00:19:54.287 "name": "BaseBdev3", 00:19:54.287 "aliases": [ 00:19:54.287 "0288b021-bc30-4621-a26d-70c28c2dd2ab" 00:19:54.287 ], 00:19:54.287 "product_name": "Malloc disk", 00:19:54.287 "block_size": 512, 00:19:54.287 "num_blocks": 65536, 00:19:54.287 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:54.287 "assigned_rate_limits": { 00:19:54.287 "rw_ios_per_sec": 0, 00:19:54.287 "rw_mbytes_per_sec": 0, 00:19:54.287 "r_mbytes_per_sec": 0, 00:19:54.287 "w_mbytes_per_sec": 0 00:19:54.287 }, 00:19:54.287 "claimed": false, 00:19:54.287 "zoned": false, 00:19:54.287 "supported_io_types": { 00:19:54.287 "read": true, 00:19:54.287 "write": true, 00:19:54.287 "unmap": true, 00:19:54.287 "flush": true, 00:19:54.287 "reset": true, 00:19:54.287 "nvme_admin": false, 00:19:54.287 "nvme_io": false, 00:19:54.287 "nvme_io_md": false, 00:19:54.287 "write_zeroes": true, 00:19:54.287 "zcopy": true, 00:19:54.287 "get_zone_info": 
false, 00:19:54.287 "zone_management": false, 00:19:54.287 "zone_append": false, 00:19:54.287 "compare": false, 00:19:54.287 "compare_and_write": false, 00:19:54.287 "abort": true, 00:19:54.287 "seek_hole": false, 00:19:54.287 "seek_data": false, 00:19:54.287 "copy": true, 00:19:54.287 "nvme_iov_md": false 00:19:54.287 }, 00:19:54.287 "memory_domains": [ 00:19:54.287 { 00:19:54.287 "dma_device_id": "system", 00:19:54.287 "dma_device_type": 1 00:19:54.287 }, 00:19:54.287 { 00:19:54.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.287 "dma_device_type": 2 00:19:54.287 } 00:19:54.287 ], 00:19:54.287 "driver_specific": {} 00:19:54.287 } 00:19:54.287 ] 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 [2024-10-30 10:48:15.633250] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.287 [2024-10-30 10:48:15.633305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:54.287 [2024-10-30 10:48:15.633336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.287 [2024-10-30 10:48:15.635718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.287 10:48:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.287 "name": "Existed_Raid", 00:19:54.287 "uuid": "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:54.287 "strip_size_kb": 64, 00:19:54.287 "state": "configuring", 00:19:54.287 "raid_level": "raid5f", 00:19:54.287 "superblock": true, 00:19:54.287 "num_base_bdevs": 3, 00:19:54.287 "num_base_bdevs_discovered": 2, 00:19:54.287 "num_base_bdevs_operational": 3, 00:19:54.287 "base_bdevs_list": [ 00:19:54.287 { 00:19:54.287 "name": "BaseBdev1", 00:19:54.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.287 "is_configured": false, 00:19:54.287 "data_offset": 0, 00:19:54.287 "data_size": 0 00:19:54.287 }, 00:19:54.287 { 00:19:54.287 "name": "BaseBdev2", 00:19:54.287 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:54.287 "is_configured": true, 00:19:54.287 "data_offset": 2048, 00:19:54.287 "data_size": 63488 00:19:54.287 }, 00:19:54.287 { 00:19:54.287 "name": "BaseBdev3", 00:19:54.287 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:54.287 "is_configured": true, 00:19:54.287 "data_offset": 2048, 00:19:54.287 "data_size": 63488 00:19:54.287 } 00:19:54.287 ] 00:19:54.287 }' 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.287 10:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.856 [2024-10-30 10:48:16.121446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.856 
10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.856 "name": "Existed_Raid", 00:19:54.856 "uuid": 
"cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:54.856 "strip_size_kb": 64, 00:19:54.856 "state": "configuring", 00:19:54.856 "raid_level": "raid5f", 00:19:54.856 "superblock": true, 00:19:54.856 "num_base_bdevs": 3, 00:19:54.856 "num_base_bdevs_discovered": 1, 00:19:54.856 "num_base_bdevs_operational": 3, 00:19:54.856 "base_bdevs_list": [ 00:19:54.856 { 00:19:54.856 "name": "BaseBdev1", 00:19:54.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.856 "is_configured": false, 00:19:54.856 "data_offset": 0, 00:19:54.856 "data_size": 0 00:19:54.856 }, 00:19:54.856 { 00:19:54.856 "name": null, 00:19:54.856 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:54.856 "is_configured": false, 00:19:54.856 "data_offset": 0, 00:19:54.856 "data_size": 63488 00:19:54.856 }, 00:19:54.856 { 00:19:54.856 "name": "BaseBdev3", 00:19:54.856 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:54.856 "is_configured": true, 00:19:54.856 "data_offset": 2048, 00:19:54.856 "data_size": 63488 00:19:54.856 } 00:19:54.856 ] 00:19:54.856 }' 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.856 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:55.424 10:48:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.424 [2024-10-30 10:48:16.742513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.424 BaseBdev1 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:55.424 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.424 [ 00:19:55.424 { 00:19:55.424 "name": "BaseBdev1", 00:19:55.424 "aliases": [ 00:19:55.425 "192e2eb4-8a90-4068-8166-6eb06e6f9b89" 00:19:55.425 ], 00:19:55.425 "product_name": "Malloc disk", 00:19:55.425 "block_size": 512, 00:19:55.425 "num_blocks": 65536, 00:19:55.425 "uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:55.425 "assigned_rate_limits": { 00:19:55.425 "rw_ios_per_sec": 0, 00:19:55.425 "rw_mbytes_per_sec": 0, 00:19:55.425 "r_mbytes_per_sec": 0, 00:19:55.425 "w_mbytes_per_sec": 0 00:19:55.425 }, 00:19:55.425 "claimed": true, 00:19:55.425 "claim_type": "exclusive_write", 00:19:55.425 "zoned": false, 00:19:55.425 "supported_io_types": { 00:19:55.425 "read": true, 00:19:55.425 "write": true, 00:19:55.425 "unmap": true, 00:19:55.425 "flush": true, 00:19:55.425 "reset": true, 00:19:55.425 "nvme_admin": false, 00:19:55.425 "nvme_io": false, 00:19:55.425 "nvme_io_md": false, 00:19:55.425 "write_zeroes": true, 00:19:55.425 "zcopy": true, 00:19:55.425 "get_zone_info": false, 00:19:55.425 "zone_management": false, 00:19:55.425 "zone_append": false, 00:19:55.425 "compare": false, 00:19:55.425 "compare_and_write": false, 00:19:55.425 "abort": true, 00:19:55.425 "seek_hole": false, 00:19:55.425 "seek_data": false, 00:19:55.425 "copy": true, 00:19:55.425 "nvme_iov_md": false 00:19:55.425 }, 00:19:55.425 "memory_domains": [ 00:19:55.425 { 00:19:55.425 "dma_device_id": "system", 00:19:55.425 "dma_device_type": 1 00:19:55.425 }, 00:19:55.425 { 00:19:55.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.425 "dma_device_type": 2 00:19:55.425 } 00:19:55.425 ], 00:19:55.425 "driver_specific": {} 00:19:55.425 } 00:19:55.425 ] 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # 
return 0 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.425 "name": "Existed_Raid", 00:19:55.425 "uuid": 
"cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:55.425 "strip_size_kb": 64, 00:19:55.425 "state": "configuring", 00:19:55.425 "raid_level": "raid5f", 00:19:55.425 "superblock": true, 00:19:55.425 "num_base_bdevs": 3, 00:19:55.425 "num_base_bdevs_discovered": 2, 00:19:55.425 "num_base_bdevs_operational": 3, 00:19:55.425 "base_bdevs_list": [ 00:19:55.425 { 00:19:55.425 "name": "BaseBdev1", 00:19:55.425 "uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:55.425 "is_configured": true, 00:19:55.425 "data_offset": 2048, 00:19:55.425 "data_size": 63488 00:19:55.425 }, 00:19:55.425 { 00:19:55.425 "name": null, 00:19:55.425 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:55.425 "is_configured": false, 00:19:55.425 "data_offset": 0, 00:19:55.425 "data_size": 63488 00:19:55.425 }, 00:19:55.425 { 00:19:55.425 "name": "BaseBdev3", 00:19:55.425 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:55.425 "is_configured": true, 00:19:55.425 "data_offset": 2048, 00:19:55.425 "data_size": 63488 00:19:55.425 } 00:19:55.425 ] 00:19:55.425 }' 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.425 10:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:55.993 10:48:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.993 [2024-10-30 10:48:17.318717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.993 "name": "Existed_Raid", 00:19:55.993 "uuid": "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:55.993 "strip_size_kb": 64, 00:19:55.993 "state": "configuring", 00:19:55.993 "raid_level": "raid5f", 00:19:55.993 "superblock": true, 00:19:55.993 "num_base_bdevs": 3, 00:19:55.993 "num_base_bdevs_discovered": 1, 00:19:55.993 "num_base_bdevs_operational": 3, 00:19:55.993 "base_bdevs_list": [ 00:19:55.993 { 00:19:55.993 "name": "BaseBdev1", 00:19:55.993 "uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:55.993 "is_configured": true, 00:19:55.993 "data_offset": 2048, 00:19:55.993 "data_size": 63488 00:19:55.993 }, 00:19:55.993 { 00:19:55.993 "name": null, 00:19:55.993 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:55.993 "is_configured": false, 00:19:55.993 "data_offset": 0, 00:19:55.993 "data_size": 63488 00:19:55.993 }, 00:19:55.993 { 00:19:55.993 "name": null, 00:19:55.993 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:55.993 "is_configured": false, 00:19:55.993 "data_offset": 0, 00:19:55.993 "data_size": 63488 00:19:55.993 } 00:19:55.993 ] 00:19:55.993 }' 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.993 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.560 [2024-10-30 10:48:17.890948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.560 10:48:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.560 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.561 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.561 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.561 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.561 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.561 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.561 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.561 "name": "Existed_Raid", 00:19:56.561 "uuid": "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:56.561 "strip_size_kb": 64, 00:19:56.561 "state": "configuring", 00:19:56.561 "raid_level": "raid5f", 00:19:56.561 "superblock": true, 00:19:56.561 "num_base_bdevs": 3, 00:19:56.561 "num_base_bdevs_discovered": 2, 00:19:56.561 "num_base_bdevs_operational": 3, 00:19:56.561 "base_bdevs_list": [ 00:19:56.561 { 00:19:56.561 "name": "BaseBdev1", 00:19:56.561 "uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:56.561 "is_configured": true, 00:19:56.561 "data_offset": 2048, 00:19:56.561 "data_size": 63488 00:19:56.561 }, 00:19:56.561 { 00:19:56.561 "name": null, 00:19:56.561 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:56.561 "is_configured": false, 00:19:56.561 "data_offset": 0, 00:19:56.561 "data_size": 63488 00:19:56.561 }, 00:19:56.561 { 00:19:56.561 "name": "BaseBdev3", 00:19:56.561 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:56.561 
"is_configured": true, 00:19:56.561 "data_offset": 2048, 00:19:56.561 "data_size": 63488 00:19:56.561 } 00:19:56.561 ] 00:19:56.561 }' 00:19:56.561 10:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.561 10:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.172 [2024-10-30 10:48:18.479098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.172 "name": "Existed_Raid", 00:19:57.172 "uuid": "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:57.172 "strip_size_kb": 64, 00:19:57.172 "state": "configuring", 00:19:57.172 "raid_level": "raid5f", 00:19:57.172 "superblock": true, 00:19:57.172 "num_base_bdevs": 3, 00:19:57.172 "num_base_bdevs_discovered": 1, 00:19:57.172 "num_base_bdevs_operational": 3, 00:19:57.172 "base_bdevs_list": [ 00:19:57.172 { 00:19:57.172 "name": null, 00:19:57.172 
"uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:57.172 "is_configured": false, 00:19:57.172 "data_offset": 0, 00:19:57.172 "data_size": 63488 00:19:57.172 }, 00:19:57.172 { 00:19:57.172 "name": null, 00:19:57.172 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:57.172 "is_configured": false, 00:19:57.172 "data_offset": 0, 00:19:57.172 "data_size": 63488 00:19:57.172 }, 00:19:57.172 { 00:19:57.172 "name": "BaseBdev3", 00:19:57.172 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:57.172 "is_configured": true, 00:19:57.172 "data_offset": 2048, 00:19:57.172 "data_size": 63488 00:19:57.172 } 00:19:57.172 ] 00:19:57.172 }' 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.172 10:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.741 [2024-10-30 10:48:19.112781] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.741 "name": "Existed_Raid", 00:19:57.741 "uuid": "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:57.741 "strip_size_kb": 64, 00:19:57.741 "state": "configuring", 00:19:57.741 "raid_level": "raid5f", 00:19:57.741 "superblock": true, 00:19:57.741 "num_base_bdevs": 3, 00:19:57.741 "num_base_bdevs_discovered": 2, 00:19:57.741 "num_base_bdevs_operational": 3, 00:19:57.741 "base_bdevs_list": [ 00:19:57.741 { 00:19:57.741 "name": null, 00:19:57.741 "uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:57.741 "is_configured": false, 00:19:57.741 "data_offset": 0, 00:19:57.741 "data_size": 63488 00:19:57.741 }, 00:19:57.741 { 00:19:57.741 "name": "BaseBdev2", 00:19:57.741 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:57.741 "is_configured": true, 00:19:57.741 "data_offset": 2048, 00:19:57.741 "data_size": 63488 00:19:57.741 }, 00:19:57.741 { 00:19:57.741 "name": "BaseBdev3", 00:19:57.741 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:57.741 "is_configured": true, 00:19:57.741 "data_offset": 2048, 00:19:57.741 "data_size": 63488 00:19:57.741 } 00:19:57.741 ] 00:19:57.741 }' 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.741 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 192e2eb4-8a90-4068-8166-6eb06e6f9b89 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.315 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.583 [2024-10-30 10:48:19.787659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:58.583 [2024-10-30 10:48:19.787968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:58.583 [2024-10-30 10:48:19.788009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:58.583 NewBaseBdev 00:19:58.583 [2024-10-30 10:48:19.788329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.583 [2024-10-30 10:48:19.793302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:58.583 [2024-10-30 10:48:19.793331] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:58.583 [2024-10-30 10:48:19.793649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.583 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.583 [ 00:19:58.583 { 00:19:58.583 "name": "NewBaseBdev", 00:19:58.583 "aliases": [ 00:19:58.583 "192e2eb4-8a90-4068-8166-6eb06e6f9b89" 00:19:58.583 ], 00:19:58.583 "product_name": "Malloc disk", 00:19:58.583 "block_size": 512, 
00:19:58.583 "num_blocks": 65536, 00:19:58.583 "uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:58.583 "assigned_rate_limits": { 00:19:58.583 "rw_ios_per_sec": 0, 00:19:58.583 "rw_mbytes_per_sec": 0, 00:19:58.584 "r_mbytes_per_sec": 0, 00:19:58.584 "w_mbytes_per_sec": 0 00:19:58.584 }, 00:19:58.584 "claimed": true, 00:19:58.584 "claim_type": "exclusive_write", 00:19:58.584 "zoned": false, 00:19:58.584 "supported_io_types": { 00:19:58.584 "read": true, 00:19:58.584 "write": true, 00:19:58.584 "unmap": true, 00:19:58.584 "flush": true, 00:19:58.584 "reset": true, 00:19:58.584 "nvme_admin": false, 00:19:58.584 "nvme_io": false, 00:19:58.584 "nvme_io_md": false, 00:19:58.584 "write_zeroes": true, 00:19:58.584 "zcopy": true, 00:19:58.584 "get_zone_info": false, 00:19:58.584 "zone_management": false, 00:19:58.584 "zone_append": false, 00:19:58.584 "compare": false, 00:19:58.584 "compare_and_write": false, 00:19:58.584 "abort": true, 00:19:58.584 "seek_hole": false, 00:19:58.584 "seek_data": false, 00:19:58.584 "copy": true, 00:19:58.584 "nvme_iov_md": false 00:19:58.584 }, 00:19:58.584 "memory_domains": [ 00:19:58.584 { 00:19:58.584 "dma_device_id": "system", 00:19:58.584 "dma_device_type": 1 00:19:58.584 }, 00:19:58.584 { 00:19:58.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.584 "dma_device_type": 2 00:19:58.584 } 00:19:58.584 ], 00:19:58.584 "driver_specific": {} 00:19:58.584 } 00:19:58.584 ] 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.584 "name": "Existed_Raid", 00:19:58.584 "uuid": "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:58.584 "strip_size_kb": 64, 00:19:58.584 "state": "online", 00:19:58.584 "raid_level": "raid5f", 00:19:58.584 "superblock": true, 00:19:58.584 "num_base_bdevs": 3, 00:19:58.584 "num_base_bdevs_discovered": 3, 00:19:58.584 "num_base_bdevs_operational": 3, 00:19:58.584 "base_bdevs_list": [ 00:19:58.584 { 00:19:58.584 "name": 
"NewBaseBdev", 00:19:58.584 "uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:58.584 "is_configured": true, 00:19:58.584 "data_offset": 2048, 00:19:58.584 "data_size": 63488 00:19:58.584 }, 00:19:58.584 { 00:19:58.584 "name": "BaseBdev2", 00:19:58.584 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:58.584 "is_configured": true, 00:19:58.584 "data_offset": 2048, 00:19:58.584 "data_size": 63488 00:19:58.584 }, 00:19:58.584 { 00:19:58.584 "name": "BaseBdev3", 00:19:58.584 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:58.584 "is_configured": true, 00:19:58.584 "data_offset": 2048, 00:19:58.584 "data_size": 63488 00:19:58.584 } 00:19:58.584 ] 00:19:58.584 }' 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.584 10:48:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:59.154 10:48:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.154 [2024-10-30 10:48:20.375878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:59.154 "name": "Existed_Raid", 00:19:59.154 "aliases": [ 00:19:59.154 "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc" 00:19:59.154 ], 00:19:59.154 "product_name": "Raid Volume", 00:19:59.154 "block_size": 512, 00:19:59.154 "num_blocks": 126976, 00:19:59.154 "uuid": "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:59.154 "assigned_rate_limits": { 00:19:59.154 "rw_ios_per_sec": 0, 00:19:59.154 "rw_mbytes_per_sec": 0, 00:19:59.154 "r_mbytes_per_sec": 0, 00:19:59.154 "w_mbytes_per_sec": 0 00:19:59.154 }, 00:19:59.154 "claimed": false, 00:19:59.154 "zoned": false, 00:19:59.154 "supported_io_types": { 00:19:59.154 "read": true, 00:19:59.154 "write": true, 00:19:59.154 "unmap": false, 00:19:59.154 "flush": false, 00:19:59.154 "reset": true, 00:19:59.154 "nvme_admin": false, 00:19:59.154 "nvme_io": false, 00:19:59.154 "nvme_io_md": false, 00:19:59.154 "write_zeroes": true, 00:19:59.154 "zcopy": false, 00:19:59.154 "get_zone_info": false, 00:19:59.154 "zone_management": false, 00:19:59.154 "zone_append": false, 00:19:59.154 "compare": false, 00:19:59.154 "compare_and_write": false, 00:19:59.154 "abort": false, 00:19:59.154 "seek_hole": false, 00:19:59.154 "seek_data": false, 00:19:59.154 "copy": false, 00:19:59.154 "nvme_iov_md": false 00:19:59.154 }, 00:19:59.154 "driver_specific": { 00:19:59.154 "raid": { 00:19:59.154 "uuid": "cd2d8e3d-1ddf-4b26-9ece-4afbb180dbdc", 00:19:59.154 "strip_size_kb": 64, 00:19:59.154 "state": "online", 00:19:59.154 "raid_level": "raid5f", 00:19:59.154 "superblock": true, 00:19:59.154 "num_base_bdevs": 3, 00:19:59.154 
"num_base_bdevs_discovered": 3, 00:19:59.154 "num_base_bdevs_operational": 3, 00:19:59.154 "base_bdevs_list": [ 00:19:59.154 { 00:19:59.154 "name": "NewBaseBdev", 00:19:59.154 "uuid": "192e2eb4-8a90-4068-8166-6eb06e6f9b89", 00:19:59.154 "is_configured": true, 00:19:59.154 "data_offset": 2048, 00:19:59.154 "data_size": 63488 00:19:59.154 }, 00:19:59.154 { 00:19:59.154 "name": "BaseBdev2", 00:19:59.154 "uuid": "6c776d98-cea3-4897-a82d-0120424eb200", 00:19:59.154 "is_configured": true, 00:19:59.154 "data_offset": 2048, 00:19:59.154 "data_size": 63488 00:19:59.154 }, 00:19:59.154 { 00:19:59.154 "name": "BaseBdev3", 00:19:59.154 "uuid": "0288b021-bc30-4621-a26d-70c28c2dd2ab", 00:19:59.154 "is_configured": true, 00:19:59.154 "data_offset": 2048, 00:19:59.154 "data_size": 63488 00:19:59.154 } 00:19:59.154 ] 00:19:59.154 } 00:19:59.154 } 00:19:59.154 }' 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:59.154 BaseBdev2 00:19:59.154 BaseBdev3' 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.154 10:48:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.154 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.415 10:48:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.415 [2024-10-30 10:48:20.707744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:59.415 [2024-10-30 10:48:20.707781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.415 [2024-10-30 10:48:20.707889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.415 [2024-10-30 10:48:20.708325] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.415 [2024-10-30 10:48:20.708359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81004 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 81004 ']' 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 81004 00:19:59.415 10:48:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81004 00:19:59.415 killing process with pid 81004 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81004' 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 81004 00:19:59.415 [2024-10-30 10:48:20.748489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.415 10:48:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 81004 00:19:59.674 [2024-10-30 10:48:21.029603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.613 10:48:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:00.613 00:20:00.613 real 0m11.878s 00:20:00.613 user 0m19.724s 00:20:00.613 sys 0m1.663s 00:20:00.613 10:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:00.613 ************************************ 00:20:00.613 END TEST raid5f_state_function_test_sb 00:20:00.613 ************************************ 00:20:00.613 10:48:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.872 10:48:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:20:00.872 10:48:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:00.872 
10:48:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:00.872 10:48:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.872 ************************************ 00:20:00.872 START TEST raid5f_superblock_test 00:20:00.872 ************************************ 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81650 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81650 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81650 ']' 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.872 10:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.872 [2024-10-30 10:48:22.247648] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
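[Editor's note] The verify_raid_bdev_properties helper traced in this log drives plain jq filters over RPC output. Below is a minimal, self-contained sketch of the bdev_raid.sh@188 filter, run against a hand-written stand-in for `rpc_cmd bdev_get_bdevs` output — the field names are copied from the JSON dumped earlier in this log, but the sample values are illustrative, not from this run:

```shell
# Stand-in for one element of the bdev_get_bdevs JSON seen above
# (only the fields the @188 filter touches; values are illustrative).
info='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"NewBaseBdev","is_configured":true},
  {"name":"BaseBdev2","is_configured":true},
  {"name":"BaseBdev3","is_configured":false}]}}}'

# bdev_raid.sh@188: keep only the names of configured base bdevs.
base_bdev_names=$(echo "$info" \
  | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$base_bdev_names"
```

The helper then loops `for name in $base_bdev_names`, querying each base bdev individually, which is exactly the per-bdev `rpc_cmd bdev_get_bdevs -b <name>` sequence visible in the trace.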
00:20:00.872 [2024-10-30 10:48:22.247838] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81650 ] 00:20:01.130 [2024-10-30 10:48:22.432478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.130 [2024-10-30 10:48:22.565274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.387 [2024-10-30 10:48:22.780071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.387 [2024-10-30 10:48:22.780160] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.956 malloc1 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.956 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.956 [2024-10-30 10:48:23.272109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:01.956 [2024-10-30 10:48:23.272180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.956 [2024-10-30 10:48:23.272215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:01.957 [2024-10-30 10:48:23.272231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.957 [2024-10-30 10:48:23.275433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.957 [2024-10-30 10:48:23.275475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:01.957 pt1 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
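[Editor's note] The `cmp_raid_bdev`/`cmp_base_bdev` comparisons that recur throughout this log (bdev_raid.sh@189–193) build a "block_size md_size md_interleave dif_type" fingerprint with jq's `join`. A self-contained sketch, using an illustrative stand-in for the RPC output rather than this run's data:

```shell
# Stand-in for bdev_get_bdevs output for a 512-byte bdev with no
# metadata/DIF fields (illustrative sample, not from this run).
bdev='[{"name":"pt1","block_size":512}]'

# bdev_raid.sh@192: absent fields come back as null, and jq's join()
# renders null as an empty string, so the fingerprint is "512" followed
# by three spaces -- which is the "cmp_base_bdev='512   '" seen above.
cmp_base_bdev=$(echo "$bdev" \
  | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
```

The trailing spaces explain the otherwise odd-looking `[[ 512 == \5\1\2\ \ \ ]]` pattern match at bdev_raid.sh@193: the raid volume's fingerprint must byte-for-byte equal each base bdev's, empty fields included.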
00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 malloc2 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 [2024-10-30 10:48:23.328939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:01.957 [2024-10-30 10:48:23.329027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.957 [2024-10-30 10:48:23.329058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:01.957 [2024-10-30 10:48:23.329072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.957 [2024-10-30 10:48:23.331920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.957 [2024-10-30 10:48:23.331959] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:01.957 pt2 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 malloc3 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 [2024-10-30 10:48:23.401296] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:01.957 [2024-10-30 10:48:23.401371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.957 [2024-10-30 10:48:23.401403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:01.957 [2024-10-30 10:48:23.401419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.957 [2024-10-30 10:48:23.404308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.957 [2024-10-30 10:48:23.404363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:01.957 pt3 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.957 [2024-10-30 10:48:23.413348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:01.957 [2024-10-30 10:48:23.415879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:01.957 [2024-10-30 10:48:23.416012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:01.957 [2024-10-30 10:48:23.416271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:01.957 [2024-10-30 10:48:23.416300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
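[Editor's note] Once the raid bdev is assembled, verify_raid_bdev_state (bdev_raid.sh@113, invoked next in the trace) selects the bdev under test out of `rpc_cmd bdev_raid_get_bdevs all` by name. A minimal sketch with a hand-written stand-in for the RPC response (fields taken from the dumps in this log, values illustrative):

```shell
# Stand-in for bdev_raid_get_bdevs all: a JSON array of raid bdevs.
raids='[{"name":"raid_bdev1","state":"online","raid_level":"raid5f","strip_size_kb":64}]'

# bdev_raid.sh@113: extract the entry for the bdev under test.
raid_bdev_info=$(echo "$raids" | jq -r '.[] | select(.name == "raid_bdev1")')

# The helper then reads individual fields from the captured blob, e.g.
# the state, and compares them against the expected values.
state=$(echo "$raid_bdev_info" | jq -r '.state')
```

This is why the trace stores the whole JSON object into `raid_bdev_info` first: later assertions on state, raid_level, strip_size, and base bdev counts all re-query that one captured blob instead of issuing fresh RPCs.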
00:20:01.957 [2024-10-30 10:48:23.416623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:01.957 [2024-10-30 10:48:23.421918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:01.957 [2024-10-30 10:48:23.421959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:01.957 [2024-10-30 10:48:23.422245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.957 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.258 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.258 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.258 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:20:02.258 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.258 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.258 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.258 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.258 "name": "raid_bdev1", 00:20:02.258 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:02.258 "strip_size_kb": 64, 00:20:02.258 "state": "online", 00:20:02.258 "raid_level": "raid5f", 00:20:02.258 "superblock": true, 00:20:02.258 "num_base_bdevs": 3, 00:20:02.258 "num_base_bdevs_discovered": 3, 00:20:02.258 "num_base_bdevs_operational": 3, 00:20:02.258 "base_bdevs_list": [ 00:20:02.258 { 00:20:02.258 "name": "pt1", 00:20:02.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.258 "is_configured": true, 00:20:02.258 "data_offset": 2048, 00:20:02.258 "data_size": 63488 00:20:02.258 }, 00:20:02.258 { 00:20:02.258 "name": "pt2", 00:20:02.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.258 "is_configured": true, 00:20:02.258 "data_offset": 2048, 00:20:02.258 "data_size": 63488 00:20:02.258 }, 00:20:02.258 { 00:20:02.258 "name": "pt3", 00:20:02.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:02.258 "is_configured": true, 00:20:02.258 "data_offset": 2048, 00:20:02.259 "data_size": 63488 00:20:02.259 } 00:20:02.259 ] 00:20:02.259 }' 00:20:02.259 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.259 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:02.552 10:48:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:02.552 [2024-10-30 10:48:23.964566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.552 10:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.552 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:02.552 "name": "raid_bdev1", 00:20:02.552 "aliases": [ 00:20:02.552 "4acc90b1-0efc-4dce-b6de-2a0f66171d9f" 00:20:02.552 ], 00:20:02.552 "product_name": "Raid Volume", 00:20:02.552 "block_size": 512, 00:20:02.552 "num_blocks": 126976, 00:20:02.552 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:02.552 "assigned_rate_limits": { 00:20:02.552 "rw_ios_per_sec": 0, 00:20:02.552 "rw_mbytes_per_sec": 0, 00:20:02.552 "r_mbytes_per_sec": 0, 00:20:02.552 "w_mbytes_per_sec": 0 00:20:02.552 }, 00:20:02.552 "claimed": false, 00:20:02.552 "zoned": false, 00:20:02.552 "supported_io_types": { 00:20:02.552 "read": true, 00:20:02.552 "write": true, 00:20:02.552 "unmap": false, 00:20:02.552 "flush": false, 00:20:02.552 "reset": true, 00:20:02.552 "nvme_admin": false, 00:20:02.552 "nvme_io": false, 00:20:02.552 "nvme_io_md": false, 
00:20:02.552 "write_zeroes": true, 00:20:02.552 "zcopy": false, 00:20:02.552 "get_zone_info": false, 00:20:02.552 "zone_management": false, 00:20:02.552 "zone_append": false, 00:20:02.552 "compare": false, 00:20:02.552 "compare_and_write": false, 00:20:02.552 "abort": false, 00:20:02.552 "seek_hole": false, 00:20:02.552 "seek_data": false, 00:20:02.552 "copy": false, 00:20:02.552 "nvme_iov_md": false 00:20:02.552 }, 00:20:02.552 "driver_specific": { 00:20:02.552 "raid": { 00:20:02.552 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:02.552 "strip_size_kb": 64, 00:20:02.552 "state": "online", 00:20:02.552 "raid_level": "raid5f", 00:20:02.552 "superblock": true, 00:20:02.552 "num_base_bdevs": 3, 00:20:02.552 "num_base_bdevs_discovered": 3, 00:20:02.552 "num_base_bdevs_operational": 3, 00:20:02.552 "base_bdevs_list": [ 00:20:02.552 { 00:20:02.552 "name": "pt1", 00:20:02.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:02.552 "is_configured": true, 00:20:02.552 "data_offset": 2048, 00:20:02.552 "data_size": 63488 00:20:02.553 }, 00:20:02.553 { 00:20:02.553 "name": "pt2", 00:20:02.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.553 "is_configured": true, 00:20:02.553 "data_offset": 2048, 00:20:02.553 "data_size": 63488 00:20:02.553 }, 00:20:02.553 { 00:20:02.553 "name": "pt3", 00:20:02.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:02.553 "is_configured": true, 00:20:02.553 "data_offset": 2048, 00:20:02.553 "data_size": 63488 00:20:02.553 } 00:20:02.553 ] 00:20:02.553 } 00:20:02.553 } 00:20:02.553 }' 00:20:02.553 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:02.813 pt2 00:20:02.813 pt3' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:02.813 
10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.813 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:02.813 [2024-10-30 10:48:24.272522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.072 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.072 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4acc90b1-0efc-4dce-b6de-2a0f66171d9f 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4acc90b1-0efc-4dce-b6de-2a0f66171d9f ']' 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:03.073 10:48:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 [2024-10-30 10:48:24.324325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.073 [2024-10-30 10:48:24.324358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.073 [2024-10-30 10:48:24.324440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.073 [2024-10-30 10:48:24.324534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.073 [2024-10-30 10:48:24.324550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 [2024-10-30 10:48:24.468430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:03.073 [2024-10-30 10:48:24.471098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:03.073 [2024-10-30 10:48:24.471172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:03.073 [2024-10-30 10:48:24.471260] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:03.073 [2024-10-30 10:48:24.471340] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:03.073 [2024-10-30 10:48:24.471374] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:03.073 [2024-10-30 10:48:24.471401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.073 [2024-10-30 10:48:24.471415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:03.073 request: 00:20:03.073 { 00:20:03.073 "name": "raid_bdev1", 00:20:03.073 "raid_level": "raid5f", 00:20:03.073 "base_bdevs": [ 00:20:03.073 "malloc1", 00:20:03.073 "malloc2", 00:20:03.073 "malloc3" 00:20:03.073 ], 00:20:03.073 "strip_size_kb": 64, 00:20:03.073 "superblock": false, 00:20:03.073 "method": "bdev_raid_create", 00:20:03.073 "req_id": 1 00:20:03.073 } 00:20:03.073 Got JSON-RPC error response 00:20:03.073 response: 00:20:03.073 { 00:20:03.073 "code": -17, 00:20:03.073 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:03.073 } 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 
10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.073 [2024-10-30 10:48:24.532382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:03.073 [2024-10-30 10:48:24.532606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.073 [2024-10-30 10:48:24.532665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:03.073 [2024-10-30 10:48:24.532682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.073 [2024-10-30 10:48:24.535722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.073 [2024-10-30 10:48:24.535875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:03.073 [2024-10-30 10:48:24.536004] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:03.073 [2024-10-30 10:48:24.536078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:03.073 pt1 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.073 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.333 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.333 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.333 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.333 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.333 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.333 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.333 "name": "raid_bdev1", 00:20:03.333 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:03.333 "strip_size_kb": 64, 00:20:03.333 "state": "configuring", 00:20:03.333 "raid_level": "raid5f", 00:20:03.333 "superblock": true, 00:20:03.333 "num_base_bdevs": 3, 00:20:03.333 "num_base_bdevs_discovered": 1, 00:20:03.333 
"num_base_bdevs_operational": 3, 00:20:03.333 "base_bdevs_list": [ 00:20:03.333 { 00:20:03.333 "name": "pt1", 00:20:03.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.333 "is_configured": true, 00:20:03.333 "data_offset": 2048, 00:20:03.333 "data_size": 63488 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "name": null, 00:20:03.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.333 "is_configured": false, 00:20:03.333 "data_offset": 2048, 00:20:03.333 "data_size": 63488 00:20:03.333 }, 00:20:03.333 { 00:20:03.333 "name": null, 00:20:03.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:03.333 "is_configured": false, 00:20:03.333 "data_offset": 2048, 00:20:03.333 "data_size": 63488 00:20:03.333 } 00:20:03.333 ] 00:20:03.333 }' 00:20:03.333 10:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.333 10:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:20:03.592 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:03.592 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.592 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 [2024-10-30 10:48:25.052585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:03.592 [2024-10-30 10:48:25.052672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.592 [2024-10-30 10:48:25.052706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:03.592 [2024-10-30 10:48:25.052722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.592 [2024-10-30 10:48:25.053306] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.592 [2024-10-30 10:48:25.053345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:03.592 [2024-10-30 10:48:25.053474] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:03.592 [2024-10-30 10:48:25.053520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:03.592 pt2 00:20:03.592 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.592 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:03.592 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.592 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.592 [2024-10-30 10:48:25.060598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:03.851 "name": "raid_bdev1", 00:20:03.851 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:03.851 "strip_size_kb": 64, 00:20:03.851 "state": "configuring", 00:20:03.851 "raid_level": "raid5f", 00:20:03.851 "superblock": true, 00:20:03.851 "num_base_bdevs": 3, 00:20:03.851 "num_base_bdevs_discovered": 1, 00:20:03.851 "num_base_bdevs_operational": 3, 00:20:03.851 "base_bdevs_list": [ 00:20:03.851 { 00:20:03.851 "name": "pt1", 00:20:03.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:03.851 "is_configured": true, 00:20:03.851 "data_offset": 2048, 00:20:03.851 "data_size": 63488 00:20:03.851 }, 00:20:03.851 { 00:20:03.851 "name": null, 00:20:03.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:03.851 "is_configured": false, 00:20:03.851 "data_offset": 0, 00:20:03.851 "data_size": 63488 00:20:03.851 }, 00:20:03.851 { 00:20:03.851 "name": null, 00:20:03.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:03.851 "is_configured": false, 00:20:03.851 "data_offset": 2048, 00:20:03.851 "data_size": 63488 00:20:03.851 } 00:20:03.851 ] 00:20:03.851 }' 00:20:03.851 10:48:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:03.851 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.418 [2024-10-30 10:48:25.588720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:04.418 [2024-10-30 10:48:25.588810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.418 [2024-10-30 10:48:25.588837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:04.418 [2024-10-30 10:48:25.588855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.418 [2024-10-30 10:48:25.589501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.418 [2024-10-30 10:48:25.589532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:04.418 [2024-10-30 10:48:25.589629] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:04.418 [2024-10-30 10:48:25.589664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:04.418 pt2 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:04.418 10:48:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.418 [2024-10-30 10:48:25.596699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:04.418 [2024-10-30 10:48:25.596767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.418 [2024-10-30 10:48:25.596788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:04.418 [2024-10-30 10:48:25.596802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.418 [2024-10-30 10:48:25.597277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.418 [2024-10-30 10:48:25.597316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:04.418 [2024-10-30 10:48:25.597390] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:04.418 [2024-10-30 10:48:25.597421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:04.418 [2024-10-30 10:48:25.597614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:04.418 [2024-10-30 10:48:25.597641] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:04.418 [2024-10-30 10:48:25.597933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:04.418 [2024-10-30 10:48:25.603230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:04.418 [2024-10-30 10:48:25.603255] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:04.418 [2024-10-30 10:48:25.603493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.418 pt3 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.418 "name": "raid_bdev1", 00:20:04.418 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:04.418 "strip_size_kb": 64, 00:20:04.418 "state": "online", 00:20:04.418 "raid_level": "raid5f", 00:20:04.418 "superblock": true, 00:20:04.418 "num_base_bdevs": 3, 00:20:04.418 "num_base_bdevs_discovered": 3, 00:20:04.418 "num_base_bdevs_operational": 3, 00:20:04.418 "base_bdevs_list": [ 00:20:04.418 { 00:20:04.418 "name": "pt1", 00:20:04.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:04.418 "is_configured": true, 00:20:04.418 "data_offset": 2048, 00:20:04.418 "data_size": 63488 00:20:04.418 }, 00:20:04.418 { 00:20:04.418 "name": "pt2", 00:20:04.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.418 "is_configured": true, 00:20:04.418 "data_offset": 2048, 00:20:04.418 "data_size": 63488 00:20:04.418 }, 00:20:04.418 { 00:20:04.418 "name": "pt3", 00:20:04.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:04.418 "is_configured": true, 00:20:04.418 "data_offset": 2048, 00:20:04.418 "data_size": 63488 00:20:04.418 } 00:20:04.418 ] 00:20:04.418 }' 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.418 10:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:04.677 
10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.677 [2024-10-30 10:48:26.125677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.677 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.935 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:04.935 "name": "raid_bdev1", 00:20:04.935 "aliases": [ 00:20:04.935 "4acc90b1-0efc-4dce-b6de-2a0f66171d9f" 00:20:04.935 ], 00:20:04.935 "product_name": "Raid Volume", 00:20:04.935 "block_size": 512, 00:20:04.935 "num_blocks": 126976, 00:20:04.935 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:04.935 "assigned_rate_limits": { 00:20:04.935 "rw_ios_per_sec": 0, 00:20:04.935 "rw_mbytes_per_sec": 0, 00:20:04.935 "r_mbytes_per_sec": 0, 00:20:04.935 "w_mbytes_per_sec": 0 00:20:04.935 }, 00:20:04.935 "claimed": false, 00:20:04.935 "zoned": false, 00:20:04.935 "supported_io_types": { 00:20:04.935 "read": true, 00:20:04.935 "write": true, 00:20:04.935 "unmap": false, 00:20:04.935 "flush": false, 00:20:04.935 "reset": true, 00:20:04.935 "nvme_admin": false, 00:20:04.935 "nvme_io": false, 00:20:04.935 "nvme_io_md": false, 00:20:04.935 "write_zeroes": true, 00:20:04.935 "zcopy": false, 00:20:04.935 "get_zone_info": false, 
00:20:04.935 "zone_management": false, 00:20:04.935 "zone_append": false, 00:20:04.935 "compare": false, 00:20:04.935 "compare_and_write": false, 00:20:04.935 "abort": false, 00:20:04.935 "seek_hole": false, 00:20:04.935 "seek_data": false, 00:20:04.935 "copy": false, 00:20:04.935 "nvme_iov_md": false 00:20:04.935 }, 00:20:04.935 "driver_specific": { 00:20:04.935 "raid": { 00:20:04.935 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:04.935 "strip_size_kb": 64, 00:20:04.935 "state": "online", 00:20:04.936 "raid_level": "raid5f", 00:20:04.936 "superblock": true, 00:20:04.936 "num_base_bdevs": 3, 00:20:04.936 "num_base_bdevs_discovered": 3, 00:20:04.936 "num_base_bdevs_operational": 3, 00:20:04.936 "base_bdevs_list": [ 00:20:04.936 { 00:20:04.936 "name": "pt1", 00:20:04.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:04.936 "is_configured": true, 00:20:04.936 "data_offset": 2048, 00:20:04.936 "data_size": 63488 00:20:04.936 }, 00:20:04.936 { 00:20:04.936 "name": "pt2", 00:20:04.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:04.936 "is_configured": true, 00:20:04.936 "data_offset": 2048, 00:20:04.936 "data_size": 63488 00:20:04.936 }, 00:20:04.936 { 00:20:04.936 "name": "pt3", 00:20:04.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:04.936 "is_configured": true, 00:20:04.936 "data_offset": 2048, 00:20:04.936 "data_size": 63488 00:20:04.936 } 00:20:04.936 ] 00:20:04.936 } 00:20:04.936 } 00:20:04.936 }' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:04.936 pt2 00:20:04.936 pt3' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:04.936 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.194 [2024-10-30 10:48:26.441718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4acc90b1-0efc-4dce-b6de-2a0f66171d9f '!=' 4acc90b1-0efc-4dce-b6de-2a0f66171d9f ']' 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:05.194 10:48:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.194 [2024-10-30 10:48:26.489577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.194 "name": "raid_bdev1", 00:20:05.194 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:05.194 "strip_size_kb": 64, 00:20:05.194 "state": "online", 00:20:05.194 "raid_level": "raid5f", 00:20:05.194 "superblock": true, 00:20:05.194 "num_base_bdevs": 3, 00:20:05.194 "num_base_bdevs_discovered": 2, 00:20:05.194 "num_base_bdevs_operational": 2, 00:20:05.194 "base_bdevs_list": [ 00:20:05.194 { 00:20:05.194 "name": null, 00:20:05.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.194 "is_configured": false, 00:20:05.194 "data_offset": 0, 00:20:05.194 "data_size": 63488 00:20:05.194 }, 00:20:05.194 { 00:20:05.194 "name": "pt2", 00:20:05.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:05.194 "is_configured": true, 00:20:05.194 "data_offset": 2048, 00:20:05.194 "data_size": 63488 00:20:05.194 }, 00:20:05.194 { 00:20:05.194 "name": "pt3", 00:20:05.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:05.194 "is_configured": true, 00:20:05.194 "data_offset": 2048, 00:20:05.194 "data_size": 63488 00:20:05.194 } 00:20:05.194 ] 00:20:05.194 }' 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.194 10:48:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.762 [2024-10-30 10:48:27.053744] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:20:05.762 [2024-10-30 10:48:27.053791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.762 [2024-10-30 10:48:27.053922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.762 [2024-10-30 10:48:27.054213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.762 [2024-10-30 10:48:27.054444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.762 10:48:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.762 [2024-10-30 10:48:27.133717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:05.762 [2024-10-30 10:48:27.133805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.762 [2024-10-30 10:48:27.133832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:05.762 [2024-10-30 10:48:27.133850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:05.762 [2024-10-30 10:48:27.136981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.762 [2024-10-30 10:48:27.137051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:05.762 [2024-10-30 10:48:27.137187] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:05.762 [2024-10-30 10:48:27.137253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:05.762 pt2 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.762 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.762 "name": "raid_bdev1", 00:20:05.762 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:05.762 "strip_size_kb": 64, 00:20:05.762 "state": "configuring", 00:20:05.762 "raid_level": "raid5f", 00:20:05.762 "superblock": true, 00:20:05.762 "num_base_bdevs": 3, 00:20:05.762 "num_base_bdevs_discovered": 1, 00:20:05.762 "num_base_bdevs_operational": 2, 00:20:05.762 "base_bdevs_list": [ 00:20:05.762 { 00:20:05.762 "name": null, 00:20:05.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.762 "is_configured": false, 00:20:05.762 "data_offset": 2048, 00:20:05.762 "data_size": 63488 00:20:05.762 }, 00:20:05.762 { 00:20:05.762 "name": "pt2", 00:20:05.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:05.762 "is_configured": true, 00:20:05.762 "data_offset": 2048, 00:20:05.763 "data_size": 63488 00:20:05.763 }, 00:20:05.763 { 00:20:05.763 "name": null, 00:20:05.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:05.763 "is_configured": false, 00:20:05.763 "data_offset": 2048, 00:20:05.763 "data_size": 63488 00:20:05.763 } 00:20:05.763 ] 00:20:05.763 }' 00:20:05.763 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.763 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 [2024-10-30 10:48:27.677867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:06.331 [2024-10-30 10:48:27.677957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.331 [2024-10-30 10:48:27.678033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:06.331 [2024-10-30 10:48:27.678055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.331 [2024-10-30 10:48:27.678626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.331 [2024-10-30 10:48:27.678655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:06.331 [2024-10-30 10:48:27.678746] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:06.331 [2024-10-30 10:48:27.678789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:06.331 [2024-10-30 10:48:27.678922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:06.331 [2024-10-30 10:48:27.678941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:06.331 [2024-10-30 10:48:27.679324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:06.331 [2024-10-30 10:48:27.684258] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:06.331 pt3 00:20:06.331 [2024-10-30 10:48:27.684484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000008200 00:20:06.331 [2024-10-30 10:48:27.684832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.331 "name": "raid_bdev1", 00:20:06.331 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:06.331 "strip_size_kb": 64, 00:20:06.331 "state": "online", 00:20:06.331 "raid_level": "raid5f", 00:20:06.331 "superblock": true, 00:20:06.331 "num_base_bdevs": 3, 00:20:06.331 "num_base_bdevs_discovered": 2, 00:20:06.331 "num_base_bdevs_operational": 2, 00:20:06.331 "base_bdevs_list": [ 00:20:06.331 { 00:20:06.331 "name": null, 00:20:06.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.331 "is_configured": false, 00:20:06.331 "data_offset": 2048, 00:20:06.331 "data_size": 63488 00:20:06.331 }, 00:20:06.331 { 00:20:06.331 "name": "pt2", 00:20:06.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.331 "is_configured": true, 00:20:06.331 "data_offset": 2048, 00:20:06.331 "data_size": 63488 00:20:06.331 }, 00:20:06.331 { 00:20:06.331 "name": "pt3", 00:20:06.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:06.331 "is_configured": true, 00:20:06.331 "data_offset": 2048, 00:20:06.331 "data_size": 63488 00:20:06.331 } 00:20:06.331 ] 00:20:06.331 }' 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.331 10:48:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.897 [2024-10-30 10:48:28.218748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.897 [2024-10-30 10:48:28.218786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.897 [2024-10-30 10:48:28.218918] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:20:06.897 [2024-10-30 10:48:28.219042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.897 [2024-10-30 10:48:28.219058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:06.897 10:48:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.897 [2024-10-30 10:48:28.290820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:06.897 [2024-10-30 10:48:28.290899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.897 [2024-10-30 10:48:28.290928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:06.897 [2024-10-30 10:48:28.290943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.897 [2024-10-30 10:48:28.293857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.897 [2024-10-30 10:48:28.293900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:06.897 [2024-10-30 10:48:28.294050] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:06.897 [2024-10-30 10:48:28.294111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:06.897 [2024-10-30 10:48:28.294275] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:06.897 [2024-10-30 10:48:28.294293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.897 [2024-10-30 10:48:28.294315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:06.897 [2024-10-30 10:48:28.294402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:06.897 pt1 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:20:06.897 10:48:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.897 "name": "raid_bdev1", 00:20:06.897 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:06.897 "strip_size_kb": 64, 00:20:06.897 "state": "configuring", 00:20:06.897 "raid_level": "raid5f", 00:20:06.897 
"superblock": true, 00:20:06.897 "num_base_bdevs": 3, 00:20:06.897 "num_base_bdevs_discovered": 1, 00:20:06.897 "num_base_bdevs_operational": 2, 00:20:06.897 "base_bdevs_list": [ 00:20:06.897 { 00:20:06.897 "name": null, 00:20:06.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.897 "is_configured": false, 00:20:06.897 "data_offset": 2048, 00:20:06.897 "data_size": 63488 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "name": "pt2", 00:20:06.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.897 "is_configured": true, 00:20:06.897 "data_offset": 2048, 00:20:06.897 "data_size": 63488 00:20:06.897 }, 00:20:06.897 { 00:20:06.897 "name": null, 00:20:06.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:06.897 "is_configured": false, 00:20:06.897 "data_offset": 2048, 00:20:06.897 "data_size": 63488 00:20:06.897 } 00:20:06.897 ] 00:20:06.897 }' 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.897 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.480 [2024-10-30 10:48:28.870941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:07.480 [2024-10-30 10:48:28.871053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.480 [2024-10-30 10:48:28.871085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:07.480 [2024-10-30 10:48:28.871100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.480 [2024-10-30 10:48:28.871753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.480 [2024-10-30 10:48:28.871797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:07.480 [2024-10-30 10:48:28.871896] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:07.480 [2024-10-30 10:48:28.871928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:07.480 [2024-10-30 10:48:28.872128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:07.480 [2024-10-30 10:48:28.872154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:07.480 [2024-10-30 10:48:28.872463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:07.480 [2024-10-30 10:48:28.877310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:07.480 [2024-10-30 10:48:28.877374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:07.480 pt3 00:20:07.480 [2024-10-30 10:48:28.877734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.480 "name": "raid_bdev1", 00:20:07.480 "uuid": "4acc90b1-0efc-4dce-b6de-2a0f66171d9f", 00:20:07.480 "strip_size_kb": 64, 00:20:07.480 "state": "online", 00:20:07.480 "raid_level": 
"raid5f", 00:20:07.480 "superblock": true, 00:20:07.480 "num_base_bdevs": 3, 00:20:07.480 "num_base_bdevs_discovered": 2, 00:20:07.480 "num_base_bdevs_operational": 2, 00:20:07.480 "base_bdevs_list": [ 00:20:07.480 { 00:20:07.480 "name": null, 00:20:07.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.480 "is_configured": false, 00:20:07.480 "data_offset": 2048, 00:20:07.480 "data_size": 63488 00:20:07.480 }, 00:20:07.480 { 00:20:07.480 "name": "pt2", 00:20:07.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.480 "is_configured": true, 00:20:07.480 "data_offset": 2048, 00:20:07.480 "data_size": 63488 00:20:07.480 }, 00:20:07.480 { 00:20:07.480 "name": "pt3", 00:20:07.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:07.480 "is_configured": true, 00:20:07.480 "data_offset": 2048, 00:20:07.480 "data_size": 63488 00:20:07.480 } 00:20:07.480 ] 00:20:07.480 }' 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.480 10:48:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.047 [2024-10-30 10:48:29.455939] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4acc90b1-0efc-4dce-b6de-2a0f66171d9f '!=' 4acc90b1-0efc-4dce-b6de-2a0f66171d9f ']' 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81650 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81650 ']' 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81650 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:08.047 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81650 00:20:08.307 killing process with pid 81650 00:20:08.307 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:08.307 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:08.307 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81650' 00:20:08.307 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81650 00:20:08.307 [2024-10-30 10:48:29.527879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:08.307 [2024-10-30 10:48:29.527980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
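The `verify_raid_bdev_state` trace above fetches `bdev_raid_get_bdevs` output and filters it with jq before comparing fields such as `state` and `num_base_bdevs_discovered`. A minimal offline sketch of that comparison (not part of the log) against a copy of the captured `raid_bdev_info` JSON — field extraction uses sed so the sketch assumes nothing beyond a POSIX shell:

```shell
# Hedged sketch: replay the state checks from verify_raid_bdev_state
# against a copy of the raid_bdev_info JSON captured in the log above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'

# get_field <key>: print the value for <key>, with surrounding quotes stripped.
get_field() {
    printf '%s\n' "$raid_bdev_info" |
        sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}.*/\1/p"
}

state=$(get_field state)
level=$(get_field raid_level)
discovered=$(get_field num_base_bdevs_discovered)

status=mismatch
[ "$state" = online ] && [ "$level" = raid5f ] && [ "$discovered" = 2 ] && status=ok
echo "raid_bdev1: $status"
```

In the live test the JSON comes from `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'`; the inline copy here only mirrors the fields the check reads.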
00:20:08.307 10:48:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81650 00:20:08.307 [2024-10-30 10:48:29.528075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.307 [2024-10-30 10:48:29.528097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:08.564 [2024-10-30 10:48:29.793671] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.499 10:48:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:09.499 00:20:09.499 real 0m8.725s 00:20:09.499 user 0m14.308s 00:20:09.499 sys 0m1.222s 00:20:09.499 10:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:09.499 10:48:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.499 ************************************ 00:20:09.499 END TEST raid5f_superblock_test 00:20:09.499 ************************************ 00:20:09.499 10:48:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:09.499 10:48:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:20:09.499 10:48:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:09.499 10:48:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:09.499 10:48:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:09.499 ************************************ 00:20:09.499 START TEST raid5f_rebuild_test 00:20:09.499 ************************************ 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:09.499 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:09.500 10:48:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82095 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82095 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 82095 ']' 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:09.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.500 10:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.758 [2024-10-30 10:48:31.025603] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:20:09.758 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:09.758 Zero copy mechanism will not be used. 00:20:09.758 [2024-10-30 10:48:31.025799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82095 ] 00:20:09.758 [2024-10-30 10:48:31.203742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.017 [2024-10-30 10:48:31.328979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.275 [2024-10-30 10:48:31.534010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.275 [2024-10-30 10:48:31.534087] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.533 10:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.533 10:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:20:10.533 10:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:10.533 10:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:10.533 10:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.533 10:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.792 BaseBdev1_malloc 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 
10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 [2024-10-30 10:48:32.043892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:10.793 [2024-10-30 10:48:32.044044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.793 [2024-10-30 10:48:32.044077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:10.793 [2024-10-30 10:48:32.044096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.793 [2024-10-30 10:48:32.046848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.793 [2024-10-30 10:48:32.046918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:10.793 BaseBdev1 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 BaseBdev2_malloc 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 [2024-10-30 10:48:32.096255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:10.793 [2024-10-30 10:48:32.096324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.793 [2024-10-30 10:48:32.096352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:10.793 [2024-10-30 10:48:32.096372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.793 [2024-10-30 10:48:32.099228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.793 [2024-10-30 10:48:32.099272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:10.793 BaseBdev2 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 BaseBdev3_malloc 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 [2024-10-30 10:48:32.160470] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:10.793 [2024-10-30 10:48:32.160585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.793 [2024-10-30 10:48:32.160616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:10.793 [2024-10-30 10:48:32.160636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.793 [2024-10-30 10:48:32.163564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.793 [2024-10-30 10:48:32.163618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:10.793 BaseBdev3 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 spare_malloc 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 spare_delay 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 [2024-10-30 10:48:32.221870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:10.793 [2024-10-30 10:48:32.221935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.793 [2024-10-30 10:48:32.221961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:10.793 [2024-10-30 10:48:32.221994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.793 [2024-10-30 10:48:32.224840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.793 [2024-10-30 10:48:32.224894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:10.793 spare 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 [2024-10-30 10:48:32.229951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.793 [2024-10-30 10:48:32.232432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.793 [2024-10-30 10:48:32.232558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:10.793 [2024-10-30 10:48:32.232676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:10.793 [2024-10-30 10:48:32.232694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:10.793 [2024-10-30 
10:48:32.233039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:10.793 [2024-10-30 10:48:32.238357] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:10.793 [2024-10-30 10:48:32.238408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:10.793 [2024-10-30 10:48:32.238665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.052 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.053 "name": "raid_bdev1", 00:20:11.053 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:11.053 "strip_size_kb": 64, 00:20:11.053 "state": "online", 00:20:11.053 "raid_level": "raid5f", 00:20:11.053 "superblock": false, 00:20:11.053 "num_base_bdevs": 3, 00:20:11.053 "num_base_bdevs_discovered": 3, 00:20:11.053 "num_base_bdevs_operational": 3, 00:20:11.053 "base_bdevs_list": [ 00:20:11.053 { 00:20:11.053 "name": "BaseBdev1", 00:20:11.053 "uuid": "45503339-023e-51eb-9e49-99156563d7b2", 00:20:11.053 "is_configured": true, 00:20:11.053 "data_offset": 0, 00:20:11.053 "data_size": 65536 00:20:11.053 }, 00:20:11.053 { 00:20:11.053 "name": "BaseBdev2", 00:20:11.053 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:11.053 "is_configured": true, 00:20:11.053 "data_offset": 0, 00:20:11.053 "data_size": 65536 00:20:11.053 }, 00:20:11.053 { 00:20:11.053 "name": "BaseBdev3", 00:20:11.053 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:11.053 "is_configured": true, 00:20:11.053 "data_offset": 0, 00:20:11.053 "data_size": 65536 00:20:11.053 } 00:20:11.053 ] 00:20:11.053 }' 00:20:11.053 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.053 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.311 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:11.311 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.311 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.311 10:48:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:11.311 [2024-10-30 10:48:32.768660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:11.569 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:11.570 10:48:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:11.828 [2024-10-30 10:48:33.160582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:11.828 /dev/nbd0 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:11.828 1+0 records in 00:20:11.828 1+0 records out 00:20:11.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317361 s, 
12.9 MB/s 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:11.828 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:20:12.395 512+0 records in 00:20:12.395 512+0 records out 00:20:12.395 67108864 bytes (67 MB, 64 MiB) copied, 0.480238 s, 140 MB/s 00:20:12.395 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:12.395 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:12.395 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:12.395 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.395 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:12.395 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:20:12.395 10:48:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:12.673 [2024-10-30 10:48:34.043444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.673 [2024-10-30 10:48:34.073376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
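The 64 MiB moved by the dd pass above follows from the raid5f geometry the test set up. A quick arithmetic sketch (not part of the log) using the values from the traced locals — a 64 KiB strip, three base bdevs, 512-byte logical blocks:

```shell
# Hedged sketch: derive the full-stripe write size behind the dd numbers
# reported above (values taken from the traced shell locals).
strip_size_kb=64
num_base_bdevs=3
blocklen=512
dd_count=512

data_bdevs=$((num_base_bdevs - 1))                    # raid5f: one strip per stripe holds parity
stripe_bytes=$((strip_size_kb * 1024 * data_bdevs))   # 131072 = the dd bs= above
write_unit_blocks=$((stripe_bytes / blocklen))        # 256, matching write_unit_size=256
total_bytes=$((dd_count * stripe_bytes))              # 67108864 bytes = 64 MiB, matching dd's report
echo "$stripe_bytes $write_unit_blocks $total_bytes"
```

Writing in full-stripe units this way lets the raid5f bdev compute parity without read-modify-write cycles on the base bdevs.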
00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.673 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.979 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.979 "name": "raid_bdev1", 00:20:12.979 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:12.979 "strip_size_kb": 64, 00:20:12.979 "state": "online", 00:20:12.979 "raid_level": "raid5f", 00:20:12.979 "superblock": false, 00:20:12.979 "num_base_bdevs": 3, 00:20:12.979 "num_base_bdevs_discovered": 2, 00:20:12.979 "num_base_bdevs_operational": 2, 00:20:12.979 "base_bdevs_list": [ 00:20:12.979 { 00:20:12.979 "name": null, 00:20:12.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.979 "is_configured": false, 00:20:12.979 "data_offset": 0, 00:20:12.979 "data_size": 65536 00:20:12.979 }, 
00:20:12.979 { 00:20:12.979 "name": "BaseBdev2", 00:20:12.979 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:12.979 "is_configured": true, 00:20:12.979 "data_offset": 0, 00:20:12.979 "data_size": 65536 00:20:12.979 }, 00:20:12.979 { 00:20:12.979 "name": "BaseBdev3", 00:20:12.979 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:12.979 "is_configured": true, 00:20:12.979 "data_offset": 0, 00:20:12.979 "data_size": 65536 00:20:12.979 } 00:20:12.979 ] 00:20:12.979 }' 00:20:12.979 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.979 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.238 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:13.238 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.238 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.238 [2024-10-30 10:48:34.581562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.238 [2024-10-30 10:48:34.598312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:20:13.238 10:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.238 10:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:13.238 [2024-10-30 10:48:34.606255] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.173 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.431 "name": "raid_bdev1", 00:20:14.431 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:14.431 "strip_size_kb": 64, 00:20:14.431 "state": "online", 00:20:14.431 "raid_level": "raid5f", 00:20:14.431 "superblock": false, 00:20:14.431 "num_base_bdevs": 3, 00:20:14.431 "num_base_bdevs_discovered": 3, 00:20:14.431 "num_base_bdevs_operational": 3, 00:20:14.431 "process": { 00:20:14.431 "type": "rebuild", 00:20:14.431 "target": "spare", 00:20:14.431 "progress": { 00:20:14.431 "blocks": 18432, 00:20:14.431 "percent": 14 00:20:14.431 } 00:20:14.431 }, 00:20:14.431 "base_bdevs_list": [ 00:20:14.431 { 00:20:14.431 "name": "spare", 00:20:14.431 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:14.431 "is_configured": true, 00:20:14.431 "data_offset": 0, 00:20:14.431 "data_size": 65536 00:20:14.431 }, 00:20:14.431 { 00:20:14.431 "name": "BaseBdev2", 00:20:14.431 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:14.431 "is_configured": true, 00:20:14.431 "data_offset": 0, 00:20:14.431 "data_size": 65536 00:20:14.431 }, 00:20:14.431 { 00:20:14.431 "name": "BaseBdev3", 00:20:14.431 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:14.431 "is_configured": true, 00:20:14.431 
"data_offset": 0, 00:20:14.431 "data_size": 65536 00:20:14.431 } 00:20:14.431 ] 00:20:14.431 }' 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.431 [2024-10-30 10:48:35.763575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:14.431 [2024-10-30 10:48:35.819905] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:14.431 [2024-10-30 10:48:35.820189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.431 [2024-10-30 10:48:35.820372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:14.431 [2024-10-30 10:48:35.820508] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.431 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.431 10:48:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.432 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.689 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.689 "name": "raid_bdev1", 00:20:14.689 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:14.689 "strip_size_kb": 64, 00:20:14.689 "state": "online", 00:20:14.689 "raid_level": "raid5f", 00:20:14.689 "superblock": false, 00:20:14.689 "num_base_bdevs": 3, 00:20:14.689 "num_base_bdevs_discovered": 2, 00:20:14.689 "num_base_bdevs_operational": 2, 00:20:14.689 "base_bdevs_list": [ 00:20:14.689 { 00:20:14.689 "name": null, 00:20:14.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.689 "is_configured": false, 00:20:14.689 "data_offset": 0, 00:20:14.689 "data_size": 65536 00:20:14.689 }, 00:20:14.689 { 00:20:14.689 
"name": "BaseBdev2", 00:20:14.689 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:14.689 "is_configured": true, 00:20:14.689 "data_offset": 0, 00:20:14.689 "data_size": 65536 00:20:14.689 }, 00:20:14.689 { 00:20:14.689 "name": "BaseBdev3", 00:20:14.689 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:14.689 "is_configured": true, 00:20:14.689 "data_offset": 0, 00:20:14.689 "data_size": 65536 00:20:14.689 } 00:20:14.689 ] 00:20:14.689 }' 00:20:14.689 10:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.689 10:48:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.946 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.947 "name": "raid_bdev1", 00:20:14.947 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:14.947 "strip_size_kb": 64, 00:20:14.947 "state": 
"online", 00:20:14.947 "raid_level": "raid5f", 00:20:14.947 "superblock": false, 00:20:14.947 "num_base_bdevs": 3, 00:20:14.947 "num_base_bdevs_discovered": 2, 00:20:14.947 "num_base_bdevs_operational": 2, 00:20:14.947 "base_bdevs_list": [ 00:20:14.947 { 00:20:14.947 "name": null, 00:20:14.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.947 "is_configured": false, 00:20:14.947 "data_offset": 0, 00:20:14.947 "data_size": 65536 00:20:14.947 }, 00:20:14.947 { 00:20:14.947 "name": "BaseBdev2", 00:20:14.947 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:14.947 "is_configured": true, 00:20:14.947 "data_offset": 0, 00:20:14.947 "data_size": 65536 00:20:14.947 }, 00:20:14.947 { 00:20:14.947 "name": "BaseBdev3", 00:20:14.947 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:14.947 "is_configured": true, 00:20:14.947 "data_offset": 0, 00:20:14.947 "data_size": 65536 00:20:14.947 } 00:20:14.947 ] 00:20:14.947 }' 00:20:15.204 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.204 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:15.204 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.204 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:15.204 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.204 10:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.204 10:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.204 [2024-10-30 10:48:36.527422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.204 [2024-10-30 10:48:36.543011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:15.204 10:48:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.204 10:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:15.204 [2024-10-30 10:48:36.550435] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.138 "name": "raid_bdev1", 00:20:16.138 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:16.138 "strip_size_kb": 64, 00:20:16.138 "state": "online", 00:20:16.138 "raid_level": "raid5f", 00:20:16.138 "superblock": false, 00:20:16.138 "num_base_bdevs": 3, 00:20:16.138 "num_base_bdevs_discovered": 3, 00:20:16.138 "num_base_bdevs_operational": 3, 00:20:16.138 "process": { 00:20:16.138 "type": "rebuild", 00:20:16.138 "target": "spare", 00:20:16.138 "progress": { 
00:20:16.138 "blocks": 18432, 00:20:16.138 "percent": 14 00:20:16.138 } 00:20:16.138 }, 00:20:16.138 "base_bdevs_list": [ 00:20:16.138 { 00:20:16.138 "name": "spare", 00:20:16.138 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:16.138 "is_configured": true, 00:20:16.138 "data_offset": 0, 00:20:16.138 "data_size": 65536 00:20:16.138 }, 00:20:16.138 { 00:20:16.138 "name": "BaseBdev2", 00:20:16.138 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:16.138 "is_configured": true, 00:20:16.138 "data_offset": 0, 00:20:16.138 "data_size": 65536 00:20:16.138 }, 00:20:16.138 { 00:20:16.138 "name": "BaseBdev3", 00:20:16.138 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:16.138 "is_configured": true, 00:20:16.138 "data_offset": 0, 00:20:16.138 "data_size": 65536 00:20:16.138 } 00:20:16.138 ] 00:20:16.138 }' 00:20:16.138 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=591 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.397 "name": "raid_bdev1", 00:20:16.397 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:16.397 "strip_size_kb": 64, 00:20:16.397 "state": "online", 00:20:16.397 "raid_level": "raid5f", 00:20:16.397 "superblock": false, 00:20:16.397 "num_base_bdevs": 3, 00:20:16.397 "num_base_bdevs_discovered": 3, 00:20:16.397 "num_base_bdevs_operational": 3, 00:20:16.397 "process": { 00:20:16.397 "type": "rebuild", 00:20:16.397 "target": "spare", 00:20:16.397 "progress": { 00:20:16.397 "blocks": 22528, 00:20:16.397 "percent": 17 00:20:16.397 } 00:20:16.397 }, 00:20:16.397 "base_bdevs_list": [ 00:20:16.397 { 00:20:16.397 "name": "spare", 00:20:16.397 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:16.397 "is_configured": true, 00:20:16.397 "data_offset": 0, 00:20:16.397 "data_size": 65536 00:20:16.397 }, 00:20:16.397 { 00:20:16.397 "name": "BaseBdev2", 00:20:16.397 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:16.397 "is_configured": true, 00:20:16.397 
"data_offset": 0, 00:20:16.397 "data_size": 65536 00:20:16.397 }, 00:20:16.397 { 00:20:16.397 "name": "BaseBdev3", 00:20:16.397 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:16.397 "is_configured": true, 00:20:16.397 "data_offset": 0, 00:20:16.397 "data_size": 65536 00:20:16.397 } 00:20:16.397 ] 00:20:16.397 }' 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.397 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.656 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.656 10:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.591 10:48:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.591 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.591 "name": "raid_bdev1", 00:20:17.591 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:17.591 "strip_size_kb": 64, 00:20:17.591 "state": "online", 00:20:17.591 "raid_level": "raid5f", 00:20:17.591 "superblock": false, 00:20:17.591 "num_base_bdevs": 3, 00:20:17.591 "num_base_bdevs_discovered": 3, 00:20:17.591 "num_base_bdevs_operational": 3, 00:20:17.591 "process": { 00:20:17.591 "type": "rebuild", 00:20:17.591 "target": "spare", 00:20:17.591 "progress": { 00:20:17.591 "blocks": 47104, 00:20:17.591 "percent": 35 00:20:17.591 } 00:20:17.591 }, 00:20:17.591 "base_bdevs_list": [ 00:20:17.591 { 00:20:17.592 "name": "spare", 00:20:17.592 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:17.592 "is_configured": true, 00:20:17.592 "data_offset": 0, 00:20:17.592 "data_size": 65536 00:20:17.592 }, 00:20:17.592 { 00:20:17.592 "name": "BaseBdev2", 00:20:17.592 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:17.592 "is_configured": true, 00:20:17.592 "data_offset": 0, 00:20:17.592 "data_size": 65536 00:20:17.592 }, 00:20:17.592 { 00:20:17.592 "name": "BaseBdev3", 00:20:17.592 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:17.592 "is_configured": true, 00:20:17.592 "data_offset": 0, 00:20:17.592 "data_size": 65536 00:20:17.592 } 00:20:17.592 ] 00:20:17.592 }' 00:20:17.592 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.592 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.592 10:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.850 10:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.850 10:48:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.794 "name": "raid_bdev1", 00:20:18.794 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:18.794 "strip_size_kb": 64, 00:20:18.794 "state": "online", 00:20:18.794 "raid_level": "raid5f", 00:20:18.794 "superblock": false, 00:20:18.794 "num_base_bdevs": 3, 00:20:18.794 "num_base_bdevs_discovered": 3, 00:20:18.794 "num_base_bdevs_operational": 3, 00:20:18.794 "process": { 00:20:18.794 "type": "rebuild", 00:20:18.794 "target": "spare", 00:20:18.794 "progress": { 00:20:18.794 "blocks": 69632, 00:20:18.794 "percent": 53 00:20:18.794 } 00:20:18.794 }, 00:20:18.794 "base_bdevs_list": [ 00:20:18.794 { 00:20:18.794 "name": "spare", 00:20:18.794 
"uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:18.794 "is_configured": true, 00:20:18.794 "data_offset": 0, 00:20:18.794 "data_size": 65536 00:20:18.794 }, 00:20:18.794 { 00:20:18.794 "name": "BaseBdev2", 00:20:18.794 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:18.794 "is_configured": true, 00:20:18.794 "data_offset": 0, 00:20:18.794 "data_size": 65536 00:20:18.794 }, 00:20:18.794 { 00:20:18.794 "name": "BaseBdev3", 00:20:18.794 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:18.794 "is_configured": true, 00:20:18.794 "data_offset": 0, 00:20:18.794 "data_size": 65536 00:20:18.794 } 00:20:18.794 ] 00:20:18.794 }' 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.794 10:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.795 10:48:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.795 10:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.053 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.054 "name": "raid_bdev1", 00:20:20.054 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:20.054 "strip_size_kb": 64, 00:20:20.054 "state": "online", 00:20:20.054 "raid_level": "raid5f", 00:20:20.054 "superblock": false, 00:20:20.054 "num_base_bdevs": 3, 00:20:20.054 "num_base_bdevs_discovered": 3, 00:20:20.054 "num_base_bdevs_operational": 3, 00:20:20.054 "process": { 00:20:20.054 "type": "rebuild", 00:20:20.054 "target": "spare", 00:20:20.054 "progress": { 00:20:20.054 "blocks": 94208, 00:20:20.054 "percent": 71 00:20:20.054 } 00:20:20.054 }, 00:20:20.054 "base_bdevs_list": [ 00:20:20.054 { 00:20:20.054 "name": "spare", 00:20:20.054 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:20.054 "is_configured": true, 00:20:20.054 "data_offset": 0, 00:20:20.054 "data_size": 65536 00:20:20.054 }, 00:20:20.054 { 00:20:20.054 "name": "BaseBdev2", 00:20:20.054 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:20.054 "is_configured": true, 00:20:20.054 "data_offset": 0, 00:20:20.054 "data_size": 65536 00:20:20.054 }, 00:20:20.054 { 00:20:20.054 "name": "BaseBdev3", 00:20:20.054 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:20.054 "is_configured": true, 00:20:20.054 "data_offset": 0, 00:20:20.054 "data_size": 65536 00:20:20.054 } 00:20:20.054 ] 00:20:20.054 }' 00:20:20.054 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.054 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.054 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.054 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.054 10:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.990 "name": "raid_bdev1", 00:20:20.990 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:20.990 "strip_size_kb": 64, 00:20:20.990 "state": "online", 00:20:20.990 "raid_level": "raid5f", 00:20:20.990 "superblock": false, 00:20:20.990 "num_base_bdevs": 3, 00:20:20.990 "num_base_bdevs_discovered": 3, 00:20:20.990 
"num_base_bdevs_operational": 3, 00:20:20.990 "process": { 00:20:20.990 "type": "rebuild", 00:20:20.990 "target": "spare", 00:20:20.990 "progress": { 00:20:20.990 "blocks": 116736, 00:20:20.990 "percent": 89 00:20:20.990 } 00:20:20.990 }, 00:20:20.990 "base_bdevs_list": [ 00:20:20.990 { 00:20:20.990 "name": "spare", 00:20:20.990 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:20.990 "is_configured": true, 00:20:20.990 "data_offset": 0, 00:20:20.990 "data_size": 65536 00:20:20.990 }, 00:20:20.990 { 00:20:20.990 "name": "BaseBdev2", 00:20:20.990 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:20.990 "is_configured": true, 00:20:20.990 "data_offset": 0, 00:20:20.990 "data_size": 65536 00:20:20.990 }, 00:20:20.990 { 00:20:20.990 "name": "BaseBdev3", 00:20:20.990 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:20.990 "is_configured": true, 00:20:20.990 "data_offset": 0, 00:20:20.990 "data_size": 65536 00:20:20.990 } 00:20:20.990 ] 00:20:20.990 }' 00:20:20.990 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.248 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.248 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.248 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.248 10:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:21.818 [2024-10-30 10:48:43.024222] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:21.818 [2024-10-30 10:48:43.024332] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:21.818 [2024-10-30 10:48:43.024672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.385 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.385 "name": "raid_bdev1", 00:20:22.385 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:22.385 "strip_size_kb": 64, 00:20:22.385 "state": "online", 00:20:22.385 "raid_level": "raid5f", 00:20:22.385 "superblock": false, 00:20:22.385 "num_base_bdevs": 3, 00:20:22.386 "num_base_bdevs_discovered": 3, 00:20:22.386 "num_base_bdevs_operational": 3, 00:20:22.386 "base_bdevs_list": [ 00:20:22.386 { 00:20:22.386 "name": "spare", 00:20:22.386 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:22.386 "is_configured": true, 00:20:22.386 "data_offset": 0, 00:20:22.386 "data_size": 65536 00:20:22.386 }, 00:20:22.386 { 00:20:22.386 "name": "BaseBdev2", 00:20:22.386 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:22.386 "is_configured": true, 00:20:22.386 
"data_offset": 0, 00:20:22.386 "data_size": 65536 00:20:22.386 }, 00:20:22.386 { 00:20:22.386 "name": "BaseBdev3", 00:20:22.386 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:22.386 "is_configured": true, 00:20:22.386 "data_offset": 0, 00:20:22.386 "data_size": 65536 00:20:22.386 } 00:20:22.386 ] 00:20:22.386 }' 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.386 10:48:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.386 "name": "raid_bdev1", 00:20:22.386 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:22.386 "strip_size_kb": 64, 00:20:22.386 "state": "online", 00:20:22.386 "raid_level": "raid5f", 00:20:22.386 "superblock": false, 00:20:22.386 "num_base_bdevs": 3, 00:20:22.386 "num_base_bdevs_discovered": 3, 00:20:22.386 "num_base_bdevs_operational": 3, 00:20:22.386 "base_bdevs_list": [ 00:20:22.386 { 00:20:22.386 "name": "spare", 00:20:22.386 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:22.386 "is_configured": true, 00:20:22.386 "data_offset": 0, 00:20:22.386 "data_size": 65536 00:20:22.386 }, 00:20:22.386 { 00:20:22.386 "name": "BaseBdev2", 00:20:22.386 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:22.386 "is_configured": true, 00:20:22.386 "data_offset": 0, 00:20:22.386 "data_size": 65536 00:20:22.386 }, 00:20:22.386 { 00:20:22.386 "name": "BaseBdev3", 00:20:22.386 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:22.386 "is_configured": true, 00:20:22.386 "data_offset": 0, 00:20:22.386 "data_size": 65536 00:20:22.386 } 00:20:22.386 ] 00:20:22.386 }' 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:22.386 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:22.644 10:48:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.644 "name": "raid_bdev1", 00:20:22.644 "uuid": "e994532d-1167-4273-8974-587c46d62f6c", 00:20:22.644 "strip_size_kb": 64, 00:20:22.644 "state": "online", 00:20:22.644 "raid_level": "raid5f", 00:20:22.644 "superblock": false, 00:20:22.644 "num_base_bdevs": 3, 00:20:22.644 "num_base_bdevs_discovered": 3, 00:20:22.644 "num_base_bdevs_operational": 3, 00:20:22.644 "base_bdevs_list": [ 00:20:22.644 { 00:20:22.644 "name": "spare", 00:20:22.644 "uuid": "517c4935-e425-5902-81e0-04e229ff9678", 00:20:22.644 "is_configured": true, 00:20:22.644 "data_offset": 0, 00:20:22.644 "data_size": 65536 00:20:22.644 }, 00:20:22.644 { 00:20:22.644 
"name": "BaseBdev2", 00:20:22.644 "uuid": "e437a7c9-b3d4-54a8-b833-3c7929556392", 00:20:22.644 "is_configured": true, 00:20:22.644 "data_offset": 0, 00:20:22.644 "data_size": 65536 00:20:22.644 }, 00:20:22.644 { 00:20:22.644 "name": "BaseBdev3", 00:20:22.644 "uuid": "496ac56c-cd50-5d2f-a326-c948ef3007bb", 00:20:22.644 "is_configured": true, 00:20:22.644 "data_offset": 0, 00:20:22.644 "data_size": 65536 00:20:22.644 } 00:20:22.644 ] 00:20:22.644 }' 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.644 10:48:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.212 [2024-10-30 10:48:44.420492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.212 [2024-10-30 10:48:44.420530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.212 [2024-10-30 10:48:44.420631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.212 [2024-10-30 10:48:44.420739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.212 [2024-10-30 10:48:44.420778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.212 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:23.471 /dev/nbd0 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.471 1+0 records in 00:20:23.471 1+0 records out 00:20:23.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268185 s, 15.3 MB/s 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.471 10:48:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:23.730 /dev/nbd1 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.730 1+0 records in 00:20:23.730 1+0 records out 00:20:23.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391955 s, 10.5 MB/s 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:23.730 10:48:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.730 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:23.989 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:23.989 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:23.989 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:23.989 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:23.989 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:23.989 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.989 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.248 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:24.507 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:24.507 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:24.507 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:24.507 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.507 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.507 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82095 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 82095 ']' 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 82095 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:24.765 10:48:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82095 00:20:24.765 killing process with pid 82095 00:20:24.765 Received shutdown signal, test time was about 60.000000 seconds 00:20:24.765 00:20:24.765 Latency(us) 00:20:24.765 
[2024-10-30T10:48:46.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.765 [2024-10-30T10:48:46.235Z] =================================================================================================================== 00:20:24.765 [2024-10-30T10:48:46.235Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.765 10:48:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:24.765 10:48:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:24.765 10:48:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82095' 00:20:24.765 10:48:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 82095 00:20:24.765 [2024-10-30 10:48:46.006030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:24.765 10:48:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 82095 00:20:25.072 [2024-10-30 10:48:46.365853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:26.006 00:20:26.006 real 0m16.484s 00:20:26.006 user 0m21.207s 00:20:26.006 sys 0m1.976s 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.006 ************************************ 00:20:26.006 END TEST raid5f_rebuild_test 00:20:26.006 ************************************ 00:20:26.006 10:48:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:20:26.006 10:48:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:26.006 10:48:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:26.006 10:48:47 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:26.006 ************************************ 00:20:26.006 START TEST raid5f_rebuild_test_sb 00:20:26.006 ************************************ 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82547 00:20:26.006 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82547 00:20:26.007 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:26.007 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82547 ']' 00:20:26.007 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.007 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:26.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.007 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.007 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:26.007 10:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.265 [2024-10-30 10:48:47.624086] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:20:26.265 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:26.265 Zero copy mechanism will not be used. 00:20:26.265 [2024-10-30 10:48:47.624334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82547 ] 00:20:26.524 [2024-10-30 10:48:47.810810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.524 [2024-10-30 10:48:47.965848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.783 [2024-10-30 10:48:48.172570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:26.783 [2024-10-30 10:48:48.172652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.350 BaseBdev1_malloc 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.350 [2024-10-30 10:48:48.674139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:27.350 [2024-10-30 10:48:48.674234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.350 [2024-10-30 10:48:48.674279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:27.350 [2024-10-30 10:48:48.674299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.350 [2024-10-30 10:48:48.677193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.350 [2024-10-30 10:48:48.677246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:27.350 BaseBdev1 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:27.350 10:48:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.350 BaseBdev2_malloc 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.350 [2024-10-30 10:48:48.726071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:27.350 [2024-10-30 10:48:48.726146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.350 [2024-10-30 10:48:48.726183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:27.350 [2024-10-30 10:48:48.726204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.350 [2024-10-30 10:48:48.729189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.350 [2024-10-30 10:48:48.729240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:27.350 BaseBdev2 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:27.350 BaseBdev3_malloc 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.350 [2024-10-30 10:48:48.786229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:27.350 [2024-10-30 10:48:48.786301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.350 [2024-10-30 10:48:48.786332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:27.350 [2024-10-30 10:48:48.786351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.350 [2024-10-30 10:48:48.789117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.350 [2024-10-30 10:48:48.789168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:27.350 BaseBdev3 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.350 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.609 spare_malloc 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.609 spare_delay 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.609 [2024-10-30 10:48:48.844678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.609 [2024-10-30 10:48:48.844746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.609 [2024-10-30 10:48:48.844773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:27.609 [2024-10-30 10:48:48.844791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.609 [2024-10-30 10:48:48.847773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.609 [2024-10-30 10:48:48.847843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.609 spare 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.609 [2024-10-30 10:48:48.852842] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:27.609 [2024-10-30 10:48:48.855385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.609 [2024-10-30 10:48:48.855482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.609 [2024-10-30 10:48:48.855793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:27.609 [2024-10-30 10:48:48.855824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:27.609 [2024-10-30 10:48:48.856170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:27.609 [2024-10-30 10:48:48.861383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:27.609 [2024-10-30 10:48:48.861433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:27.609 [2024-10-30 10:48:48.861671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.609 "name": "raid_bdev1", 00:20:27.609 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:27.609 "strip_size_kb": 64, 00:20:27.609 "state": "online", 00:20:27.609 "raid_level": "raid5f", 00:20:27.609 "superblock": true, 00:20:27.609 "num_base_bdevs": 3, 00:20:27.609 "num_base_bdevs_discovered": 3, 00:20:27.609 "num_base_bdevs_operational": 3, 00:20:27.609 "base_bdevs_list": [ 00:20:27.609 { 00:20:27.609 "name": "BaseBdev1", 00:20:27.609 "uuid": "59e97f4d-6f68-5748-ac99-127e866a3e29", 00:20:27.609 "is_configured": true, 00:20:27.609 "data_offset": 2048, 00:20:27.609 "data_size": 63488 00:20:27.609 }, 00:20:27.609 { 00:20:27.609 "name": "BaseBdev2", 00:20:27.609 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:27.609 "is_configured": true, 00:20:27.609 "data_offset": 2048, 00:20:27.609 "data_size": 63488 00:20:27.609 }, 00:20:27.609 { 00:20:27.609 "name": "BaseBdev3", 00:20:27.609 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:27.609 "is_configured": true, 
00:20:27.609 "data_offset": 2048, 00:20:27.609 "data_size": 63488 00:20:27.609 } 00:20:27.609 ] 00:20:27.609 }' 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.609 10:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.176 [2024-10-30 10:48:49.383729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:28.176 10:48:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:28.176 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:28.433 [2024-10-30 10:48:49.743649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:28.433 /dev/nbd0 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 
-- # (( i <= 20 )) 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.433 1+0 records in 00:20:28.433 1+0 records out 00:20:28.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340832 s, 12.0 MB/s 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:20:28.433 10:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:20:28.998 496+0 records in 00:20:28.998 496+0 records out 00:20:28.998 65011712 bytes (65 MB, 62 MiB) copied, 0.431154 s, 151 MB/s 00:20:28.998 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:28.998 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:28.998 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:28.998 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:28.998 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:28.998 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.998 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:29.258 [2024-10-30 10:48:50.532237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.258 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:29.258 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:29.258 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:29.258 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:29.258 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:29.258 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:29.258 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:29.258 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.259 [2024-10-30 10:48:50.562045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.259 "name": "raid_bdev1", 00:20:29.259 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:29.259 "strip_size_kb": 64, 00:20:29.259 "state": "online", 00:20:29.259 "raid_level": "raid5f", 00:20:29.259 "superblock": true, 00:20:29.259 "num_base_bdevs": 3, 00:20:29.259 "num_base_bdevs_discovered": 2, 00:20:29.259 "num_base_bdevs_operational": 2, 00:20:29.259 "base_bdevs_list": [ 00:20:29.259 { 00:20:29.259 "name": null, 00:20:29.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.259 "is_configured": false, 00:20:29.259 "data_offset": 0, 00:20:29.259 "data_size": 63488 00:20:29.259 }, 00:20:29.259 { 00:20:29.259 "name": "BaseBdev2", 00:20:29.259 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:29.259 "is_configured": true, 00:20:29.259 "data_offset": 2048, 00:20:29.259 "data_size": 63488 00:20:29.259 }, 00:20:29.259 { 00:20:29.259 "name": "BaseBdev3", 00:20:29.259 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:29.259 "is_configured": true, 00:20:29.259 "data_offset": 2048, 00:20:29.259 "data_size": 63488 00:20:29.259 } 00:20:29.259 ] 00:20:29.259 }' 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.259 10:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 10:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:29.828 10:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.828 10:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.828 [2024-10-30 10:48:51.066214] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:29.828 [2024-10-30 10:48:51.081871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:20:29.828 10:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.828 10:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:29.828 [2024-10-30 10:48:51.089290] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:30.764 "name": "raid_bdev1", 00:20:30.764 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:30.764 "strip_size_kb": 64, 00:20:30.764 "state": "online", 00:20:30.764 "raid_level": "raid5f", 00:20:30.764 
"superblock": true, 00:20:30.764 "num_base_bdevs": 3, 00:20:30.764 "num_base_bdevs_discovered": 3, 00:20:30.764 "num_base_bdevs_operational": 3, 00:20:30.764 "process": { 00:20:30.764 "type": "rebuild", 00:20:30.764 "target": "spare", 00:20:30.764 "progress": { 00:20:30.764 "blocks": 18432, 00:20:30.764 "percent": 14 00:20:30.764 } 00:20:30.764 }, 00:20:30.764 "base_bdevs_list": [ 00:20:30.764 { 00:20:30.764 "name": "spare", 00:20:30.764 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:30.764 "is_configured": true, 00:20:30.764 "data_offset": 2048, 00:20:30.764 "data_size": 63488 00:20:30.764 }, 00:20:30.764 { 00:20:30.764 "name": "BaseBdev2", 00:20:30.764 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:30.764 "is_configured": true, 00:20:30.764 "data_offset": 2048, 00:20:30.764 "data_size": 63488 00:20:30.764 }, 00:20:30.764 { 00:20:30.764 "name": "BaseBdev3", 00:20:30.764 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:30.764 "is_configured": true, 00:20:30.764 "data_offset": 2048, 00:20:30.764 "data_size": 63488 00:20:30.764 } 00:20:30.764 ] 00:20:30.764 }' 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.764 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.023 [2024-10-30 10:48:52.247019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:20:31.023 [2024-10-30 10:48:52.304690] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:31.023 [2024-10-30 10:48:52.304798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.023 [2024-10-30 10:48:52.304828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:31.023 [2024-10-30 10:48:52.304839] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.023 "name": "raid_bdev1", 00:20:31.023 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:31.023 "strip_size_kb": 64, 00:20:31.023 "state": "online", 00:20:31.023 "raid_level": "raid5f", 00:20:31.023 "superblock": true, 00:20:31.023 "num_base_bdevs": 3, 00:20:31.023 "num_base_bdevs_discovered": 2, 00:20:31.023 "num_base_bdevs_operational": 2, 00:20:31.023 "base_bdevs_list": [ 00:20:31.023 { 00:20:31.023 "name": null, 00:20:31.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.023 "is_configured": false, 00:20:31.023 "data_offset": 0, 00:20:31.023 "data_size": 63488 00:20:31.023 }, 00:20:31.023 { 00:20:31.023 "name": "BaseBdev2", 00:20:31.023 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:31.023 "is_configured": true, 00:20:31.023 "data_offset": 2048, 00:20:31.023 "data_size": 63488 00:20:31.023 }, 00:20:31.023 { 00:20:31.023 "name": "BaseBdev3", 00:20:31.023 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:31.023 "is_configured": true, 00:20:31.023 "data_offset": 2048, 00:20:31.023 "data_size": 63488 00:20:31.023 } 00:20:31.023 ] 00:20:31.023 }' 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.023 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.591 10:48:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.591 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.591 "name": "raid_bdev1", 00:20:31.591 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:31.591 "strip_size_kb": 64, 00:20:31.591 "state": "online", 00:20:31.591 "raid_level": "raid5f", 00:20:31.591 "superblock": true, 00:20:31.591 "num_base_bdevs": 3, 00:20:31.591 "num_base_bdevs_discovered": 2, 00:20:31.591 "num_base_bdevs_operational": 2, 00:20:31.591 "base_bdevs_list": [ 00:20:31.591 { 00:20:31.591 "name": null, 00:20:31.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.591 "is_configured": false, 00:20:31.591 "data_offset": 0, 00:20:31.591 "data_size": 63488 00:20:31.591 }, 00:20:31.591 { 00:20:31.591 "name": "BaseBdev2", 00:20:31.591 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:31.591 "is_configured": true, 00:20:31.591 "data_offset": 2048, 00:20:31.591 "data_size": 63488 00:20:31.591 }, 00:20:31.592 { 00:20:31.592 "name": "BaseBdev3", 00:20:31.592 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:31.592 "is_configured": true, 00:20:31.592 "data_offset": 2048, 00:20:31.592 
"data_size": 63488 00:20:31.592 } 00:20:31.592 ] 00:20:31.592 }' 00:20:31.592 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.592 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:31.592 10:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.592 10:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:31.592 10:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:31.592 10:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.592 10:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.592 [2024-10-30 10:48:53.035341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:31.592 [2024-10-30 10:48:53.050340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:20:31.592 10:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.592 10:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:31.592 [2024-10-30 10:48:53.057926] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.969 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.969 "name": "raid_bdev1", 00:20:32.970 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:32.970 "strip_size_kb": 64, 00:20:32.970 "state": "online", 00:20:32.970 "raid_level": "raid5f", 00:20:32.970 "superblock": true, 00:20:32.970 "num_base_bdevs": 3, 00:20:32.970 "num_base_bdevs_discovered": 3, 00:20:32.970 "num_base_bdevs_operational": 3, 00:20:32.970 "process": { 00:20:32.970 "type": "rebuild", 00:20:32.970 "target": "spare", 00:20:32.970 "progress": { 00:20:32.970 "blocks": 18432, 00:20:32.970 "percent": 14 00:20:32.970 } 00:20:32.970 }, 00:20:32.970 "base_bdevs_list": [ 00:20:32.970 { 00:20:32.970 "name": "spare", 00:20:32.970 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:32.970 "is_configured": true, 00:20:32.970 "data_offset": 2048, 00:20:32.970 "data_size": 63488 00:20:32.970 }, 00:20:32.970 { 00:20:32.970 "name": "BaseBdev2", 00:20:32.970 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:32.970 "is_configured": true, 00:20:32.970 "data_offset": 2048, 00:20:32.970 "data_size": 63488 00:20:32.970 }, 00:20:32.970 { 00:20:32.970 "name": "BaseBdev3", 00:20:32.970 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:32.970 "is_configured": true, 00:20:32.970 "data_offset": 2048, 00:20:32.970 "data_size": 63488 00:20:32.970 } 00:20:32.970 ] 00:20:32.970 }' 
00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:32.970 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=608 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.970 "name": "raid_bdev1", 00:20:32.970 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:32.970 "strip_size_kb": 64, 00:20:32.970 "state": "online", 00:20:32.970 "raid_level": "raid5f", 00:20:32.970 "superblock": true, 00:20:32.970 "num_base_bdevs": 3, 00:20:32.970 "num_base_bdevs_discovered": 3, 00:20:32.970 "num_base_bdevs_operational": 3, 00:20:32.970 "process": { 00:20:32.970 "type": "rebuild", 00:20:32.970 "target": "spare", 00:20:32.970 "progress": { 00:20:32.970 "blocks": 22528, 00:20:32.970 "percent": 17 00:20:32.970 } 00:20:32.970 }, 00:20:32.970 "base_bdevs_list": [ 00:20:32.970 { 00:20:32.970 "name": "spare", 00:20:32.970 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:32.970 "is_configured": true, 00:20:32.970 "data_offset": 2048, 00:20:32.970 "data_size": 63488 00:20:32.970 }, 00:20:32.970 { 00:20:32.970 "name": "BaseBdev2", 00:20:32.970 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:32.970 "is_configured": true, 00:20:32.970 "data_offset": 2048, 00:20:32.970 "data_size": 63488 00:20:32.970 }, 00:20:32.970 { 00:20:32.970 "name": "BaseBdev3", 00:20:32.970 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:32.970 "is_configured": true, 00:20:32.970 "data_offset": 2048, 00:20:32.970 "data_size": 63488 00:20:32.970 } 00:20:32.970 ] 00:20:32.970 }' 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.970 10:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.904 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.163 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.163 "name": "raid_bdev1", 00:20:34.163 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:34.163 "strip_size_kb": 64, 00:20:34.163 "state": "online", 00:20:34.163 "raid_level": "raid5f", 00:20:34.163 "superblock": true, 00:20:34.163 "num_base_bdevs": 3, 00:20:34.163 "num_base_bdevs_discovered": 3, 00:20:34.163 
"num_base_bdevs_operational": 3, 00:20:34.163 "process": { 00:20:34.163 "type": "rebuild", 00:20:34.163 "target": "spare", 00:20:34.163 "progress": { 00:20:34.163 "blocks": 45056, 00:20:34.163 "percent": 35 00:20:34.163 } 00:20:34.163 }, 00:20:34.163 "base_bdevs_list": [ 00:20:34.163 { 00:20:34.163 "name": "spare", 00:20:34.163 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:34.163 "is_configured": true, 00:20:34.163 "data_offset": 2048, 00:20:34.163 "data_size": 63488 00:20:34.163 }, 00:20:34.163 { 00:20:34.163 "name": "BaseBdev2", 00:20:34.163 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:34.163 "is_configured": true, 00:20:34.163 "data_offset": 2048, 00:20:34.163 "data_size": 63488 00:20:34.163 }, 00:20:34.163 { 00:20:34.163 "name": "BaseBdev3", 00:20:34.163 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:34.163 "is_configured": true, 00:20:34.163 "data_offset": 2048, 00:20:34.163 "data_size": 63488 00:20:34.163 } 00:20:34.163 ] 00:20:34.163 }' 00:20:34.163 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.163 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.163 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.163 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.163 10:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:35.099 "name": "raid_bdev1", 00:20:35.099 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:35.099 "strip_size_kb": 64, 00:20:35.099 "state": "online", 00:20:35.099 "raid_level": "raid5f", 00:20:35.099 "superblock": true, 00:20:35.099 "num_base_bdevs": 3, 00:20:35.099 "num_base_bdevs_discovered": 3, 00:20:35.099 "num_base_bdevs_operational": 3, 00:20:35.099 "process": { 00:20:35.099 "type": "rebuild", 00:20:35.099 "target": "spare", 00:20:35.099 "progress": { 00:20:35.099 "blocks": 69632, 00:20:35.099 "percent": 54 00:20:35.099 } 00:20:35.099 }, 00:20:35.099 "base_bdevs_list": [ 00:20:35.099 { 00:20:35.099 "name": "spare", 00:20:35.099 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:35.099 "is_configured": true, 00:20:35.099 "data_offset": 2048, 00:20:35.099 "data_size": 63488 00:20:35.099 }, 00:20:35.099 { 00:20:35.099 "name": "BaseBdev2", 00:20:35.099 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:35.099 "is_configured": true, 00:20:35.099 "data_offset": 2048, 00:20:35.099 "data_size": 63488 00:20:35.099 }, 00:20:35.099 { 00:20:35.099 "name": "BaseBdev3", 
00:20:35.099 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:35.099 "is_configured": true, 00:20:35.099 "data_offset": 2048, 00:20:35.099 "data_size": 63488 00:20:35.099 } 00:20:35.099 ] 00:20:35.099 }' 00:20:35.099 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:35.358 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:35.358 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:35.358 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:35.358 10:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.317 "name": "raid_bdev1", 00:20:36.317 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:36.317 "strip_size_kb": 64, 00:20:36.317 "state": "online", 00:20:36.317 "raid_level": "raid5f", 00:20:36.317 "superblock": true, 00:20:36.317 "num_base_bdevs": 3, 00:20:36.317 "num_base_bdevs_discovered": 3, 00:20:36.317 "num_base_bdevs_operational": 3, 00:20:36.317 "process": { 00:20:36.317 "type": "rebuild", 00:20:36.317 "target": "spare", 00:20:36.317 "progress": { 00:20:36.317 "blocks": 92160, 00:20:36.317 "percent": 72 00:20:36.317 } 00:20:36.317 }, 00:20:36.317 "base_bdevs_list": [ 00:20:36.317 { 00:20:36.317 "name": "spare", 00:20:36.317 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:36.317 "is_configured": true, 00:20:36.317 "data_offset": 2048, 00:20:36.317 "data_size": 63488 00:20:36.317 }, 00:20:36.317 { 00:20:36.317 "name": "BaseBdev2", 00:20:36.317 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:36.317 "is_configured": true, 00:20:36.317 "data_offset": 2048, 00:20:36.317 "data_size": 63488 00:20:36.317 }, 00:20:36.317 { 00:20:36.317 "name": "BaseBdev3", 00:20:36.317 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:36.317 "is_configured": true, 00:20:36.317 "data_offset": 2048, 00:20:36.317 "data_size": 63488 00:20:36.317 } 00:20:36.317 ] 00:20:36.317 }' 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.317 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.576 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.576 10:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:37.514 10:48:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.514 "name": "raid_bdev1", 00:20:37.514 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:37.514 "strip_size_kb": 64, 00:20:37.514 "state": "online", 00:20:37.514 "raid_level": "raid5f", 00:20:37.514 "superblock": true, 00:20:37.514 "num_base_bdevs": 3, 00:20:37.514 "num_base_bdevs_discovered": 3, 00:20:37.514 "num_base_bdevs_operational": 3, 00:20:37.514 "process": { 00:20:37.514 "type": "rebuild", 00:20:37.514 "target": "spare", 00:20:37.514 "progress": { 00:20:37.514 "blocks": 114688, 00:20:37.514 "percent": 90 00:20:37.514 } 00:20:37.514 }, 00:20:37.514 "base_bdevs_list": [ 00:20:37.514 { 00:20:37.514 "name": "spare", 00:20:37.514 "uuid": 
"a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:37.514 "is_configured": true, 00:20:37.514 "data_offset": 2048, 00:20:37.514 "data_size": 63488 00:20:37.514 }, 00:20:37.514 { 00:20:37.514 "name": "BaseBdev2", 00:20:37.514 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:37.514 "is_configured": true, 00:20:37.514 "data_offset": 2048, 00:20:37.514 "data_size": 63488 00:20:37.514 }, 00:20:37.514 { 00:20:37.514 "name": "BaseBdev3", 00:20:37.514 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:37.514 "is_configured": true, 00:20:37.514 "data_offset": 2048, 00:20:37.514 "data_size": 63488 00:20:37.514 } 00:20:37.514 ] 00:20:37.514 }' 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.514 10:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:38.089 [2024-10-30 10:48:59.330603] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:38.089 [2024-10-30 10:48:59.330733] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:38.089 [2024-10-30 10:48:59.330891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.657 10:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.657 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.657 "name": "raid_bdev1", 00:20:38.657 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:38.657 "strip_size_kb": 64, 00:20:38.657 "state": "online", 00:20:38.657 "raid_level": "raid5f", 00:20:38.657 "superblock": true, 00:20:38.657 "num_base_bdevs": 3, 00:20:38.657 "num_base_bdevs_discovered": 3, 00:20:38.657 "num_base_bdevs_operational": 3, 00:20:38.657 "base_bdevs_list": [ 00:20:38.657 { 00:20:38.657 "name": "spare", 00:20:38.657 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:38.657 "is_configured": true, 00:20:38.657 "data_offset": 2048, 00:20:38.657 "data_size": 63488 00:20:38.657 }, 00:20:38.657 { 00:20:38.657 "name": "BaseBdev2", 00:20:38.657 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:38.657 "is_configured": true, 00:20:38.657 "data_offset": 2048, 00:20:38.657 "data_size": 63488 00:20:38.657 }, 00:20:38.657 { 00:20:38.657 "name": "BaseBdev3", 00:20:38.657 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:38.657 "is_configured": true, 00:20:38.657 "data_offset": 2048, 00:20:38.657 "data_size": 63488 00:20:38.657 } 
00:20:38.657 ] 00:20:38.657 }' 00:20:38.657 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.657 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:38.657 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.917 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:38.917 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:38.917 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.917 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.917 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:38.917 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:38.917 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.917 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.918 "name": "raid_bdev1", 00:20:38.918 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:38.918 "strip_size_kb": 64, 00:20:38.918 "state": "online", 00:20:38.918 "raid_level": 
"raid5f", 00:20:38.918 "superblock": true, 00:20:38.918 "num_base_bdevs": 3, 00:20:38.918 "num_base_bdevs_discovered": 3, 00:20:38.918 "num_base_bdevs_operational": 3, 00:20:38.918 "base_bdevs_list": [ 00:20:38.918 { 00:20:38.918 "name": "spare", 00:20:38.918 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:38.918 "is_configured": true, 00:20:38.918 "data_offset": 2048, 00:20:38.918 "data_size": 63488 00:20:38.918 }, 00:20:38.918 { 00:20:38.918 "name": "BaseBdev2", 00:20:38.918 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:38.918 "is_configured": true, 00:20:38.918 "data_offset": 2048, 00:20:38.918 "data_size": 63488 00:20:38.918 }, 00:20:38.918 { 00:20:38.918 "name": "BaseBdev3", 00:20:38.918 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:38.918 "is_configured": true, 00:20:38.918 "data_offset": 2048, 00:20:38.918 "data_size": 63488 00:20:38.918 } 00:20:38.918 ] 00:20:38.918 }' 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.918 10:49:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.918 "name": "raid_bdev1", 00:20:38.918 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:38.918 "strip_size_kb": 64, 00:20:38.918 "state": "online", 00:20:38.918 "raid_level": "raid5f", 00:20:38.918 "superblock": true, 00:20:38.918 "num_base_bdevs": 3, 00:20:38.918 "num_base_bdevs_discovered": 3, 00:20:38.918 "num_base_bdevs_operational": 3, 00:20:38.918 "base_bdevs_list": [ 00:20:38.918 { 00:20:38.918 "name": "spare", 00:20:38.918 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:38.918 "is_configured": true, 00:20:38.918 "data_offset": 2048, 00:20:38.918 "data_size": 63488 00:20:38.918 }, 00:20:38.918 { 00:20:38.918 "name": "BaseBdev2", 00:20:38.918 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:38.918 "is_configured": true, 00:20:38.918 "data_offset": 2048, 00:20:38.918 
"data_size": 63488 00:20:38.918 }, 00:20:38.918 { 00:20:38.918 "name": "BaseBdev3", 00:20:38.918 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:38.918 "is_configured": true, 00:20:38.918 "data_offset": 2048, 00:20:38.918 "data_size": 63488 00:20:38.918 } 00:20:38.918 ] 00:20:38.918 }' 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.918 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.486 [2024-10-30 10:49:00.843226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.486 [2024-10-30 10:49:00.843266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.486 [2024-10-30 10:49:00.843375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.486 [2024-10-30 10:49:00.843474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.486 [2024-10-30 10:49:00.843498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@720 -- # jq length 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:39.486 10:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:39.744 /dev/nbd0 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:39.744 
10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:39.744 1+0 records in 00:20:39.744 1+0 records out 00:20:39.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344816 s, 11.9 MB/s 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:39.744 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:39.745 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:39.745 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:39.745 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:39.745 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:39.745 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:39.745 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:40.313 /dev/nbd1 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:40.313 1+0 records in 00:20:40.313 1+0 records out 00:20:40.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479811 s, 8.5 MB/s 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:40.313 10:49:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.313 10:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:40.880 10:49:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:40.880 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.139 10:49:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.139 [2024-10-30 10:49:02.430543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:41.139 [2024-10-30 10:49:02.430621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.139 [2024-10-30 10:49:02.430654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:41.139 [2024-10-30 10:49:02.430673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.139 [2024-10-30 10:49:02.433791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.139 [2024-10-30 10:49:02.433845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:41.139 [2024-10-30 10:49:02.433954] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:41.139 [2024-10-30 10:49:02.434053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:41.139 [2024-10-30 10:49:02.434228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:41.139 [2024-10-30 10:49:02.434378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.139 spare 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.139 [2024-10-30 10:49:02.534527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:41.139 [2024-10-30 10:49:02.534596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:20:41.139 [2024-10-30 10:49:02.535047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:20:41.139 [2024-10-30 10:49:02.539698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:41.139 [2024-10-30 10:49:02.539900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:41.139 [2024-10-30 10:49:02.540248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.139 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.398 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.398 "name": "raid_bdev1", 00:20:41.398 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:41.398 "strip_size_kb": 64, 00:20:41.398 "state": "online", 00:20:41.398 "raid_level": "raid5f", 00:20:41.398 "superblock": true, 00:20:41.398 "num_base_bdevs": 3, 00:20:41.398 "num_base_bdevs_discovered": 3, 00:20:41.398 "num_base_bdevs_operational": 3, 00:20:41.398 "base_bdevs_list": [ 00:20:41.398 { 00:20:41.398 "name": "spare", 00:20:41.398 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:41.398 "is_configured": true, 00:20:41.398 "data_offset": 2048, 00:20:41.398 "data_size": 63488 00:20:41.398 }, 00:20:41.398 { 00:20:41.398 "name": "BaseBdev2", 00:20:41.398 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:41.398 "is_configured": true, 00:20:41.398 "data_offset": 2048, 00:20:41.398 "data_size": 63488 00:20:41.398 }, 00:20:41.398 { 00:20:41.398 "name": "BaseBdev3", 00:20:41.398 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:41.398 "is_configured": true, 00:20:41.398 "data_offset": 2048, 00:20:41.398 "data_size": 63488 00:20:41.398 } 00:20:41.398 ] 00:20:41.398 }' 00:20:41.398 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.398 10:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.657 
10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.657 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.987 "name": "raid_bdev1", 00:20:41.987 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:41.987 "strip_size_kb": 64, 00:20:41.987 "state": "online", 00:20:41.987 "raid_level": "raid5f", 00:20:41.987 "superblock": true, 00:20:41.987 "num_base_bdevs": 3, 00:20:41.987 "num_base_bdevs_discovered": 3, 00:20:41.987 "num_base_bdevs_operational": 3, 00:20:41.987 "base_bdevs_list": [ 00:20:41.987 { 00:20:41.987 "name": "spare", 00:20:41.987 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:41.987 "is_configured": true, 00:20:41.987 "data_offset": 2048, 00:20:41.987 "data_size": 63488 00:20:41.987 }, 00:20:41.987 { 00:20:41.987 "name": "BaseBdev2", 00:20:41.987 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:41.987 "is_configured": true, 00:20:41.987 "data_offset": 2048, 00:20:41.987 "data_size": 63488 00:20:41.987 }, 00:20:41.987 { 00:20:41.987 "name": "BaseBdev3", 00:20:41.987 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:41.987 "is_configured": true, 00:20:41.987 "data_offset": 2048, 
00:20:41.987 "data_size": 63488 00:20:41.987 } 00:20:41.987 ] 00:20:41.987 }' 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.987 [2024-10-30 10:49:03.297952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.987 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.987 "name": "raid_bdev1", 00:20:41.987 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:41.987 "strip_size_kb": 64, 00:20:41.987 "state": "online", 00:20:41.987 "raid_level": "raid5f", 00:20:41.987 "superblock": true, 00:20:41.987 "num_base_bdevs": 3, 00:20:41.987 "num_base_bdevs_discovered": 2, 00:20:41.987 "num_base_bdevs_operational": 2, 00:20:41.987 "base_bdevs_list": [ 00:20:41.987 { 00:20:41.987 "name": null, 00:20:41.987 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:41.987 "is_configured": false, 00:20:41.987 "data_offset": 0, 00:20:41.988 "data_size": 63488 00:20:41.988 }, 00:20:41.988 { 00:20:41.988 "name": "BaseBdev2", 00:20:41.988 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:41.988 "is_configured": true, 00:20:41.988 "data_offset": 2048, 00:20:41.988 "data_size": 63488 00:20:41.988 }, 00:20:41.988 { 00:20:41.988 "name": "BaseBdev3", 00:20:41.988 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:41.988 "is_configured": true, 00:20:41.988 "data_offset": 2048, 00:20:41.988 "data_size": 63488 00:20:41.988 } 00:20:41.988 ] 00:20:41.988 }' 00:20:41.988 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.988 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.555 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:42.555 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.555 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.555 [2024-10-30 10:49:03.818140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:42.555 [2024-10-30 10:49:03.818397] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:42.555 [2024-10-30 10:49:03.818425] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:42.555 [2024-10-30 10:49:03.818483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:42.555 [2024-10-30 10:49:03.833045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:20:42.555 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.555 10:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:42.555 [2024-10-30 10:49:03.840250] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:43.491 "name": "raid_bdev1", 00:20:43.491 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:43.491 "strip_size_kb": 64, 00:20:43.491 "state": "online", 00:20:43.491 
"raid_level": "raid5f", 00:20:43.491 "superblock": true, 00:20:43.491 "num_base_bdevs": 3, 00:20:43.491 "num_base_bdevs_discovered": 3, 00:20:43.491 "num_base_bdevs_operational": 3, 00:20:43.491 "process": { 00:20:43.491 "type": "rebuild", 00:20:43.491 "target": "spare", 00:20:43.491 "progress": { 00:20:43.491 "blocks": 18432, 00:20:43.491 "percent": 14 00:20:43.491 } 00:20:43.491 }, 00:20:43.491 "base_bdevs_list": [ 00:20:43.491 { 00:20:43.491 "name": "spare", 00:20:43.491 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:43.491 "is_configured": true, 00:20:43.491 "data_offset": 2048, 00:20:43.491 "data_size": 63488 00:20:43.491 }, 00:20:43.491 { 00:20:43.491 "name": "BaseBdev2", 00:20:43.491 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:43.491 "is_configured": true, 00:20:43.491 "data_offset": 2048, 00:20:43.491 "data_size": 63488 00:20:43.491 }, 00:20:43.491 { 00:20:43.491 "name": "BaseBdev3", 00:20:43.491 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:43.491 "is_configured": true, 00:20:43.491 "data_offset": 2048, 00:20:43.491 "data_size": 63488 00:20:43.491 } 00:20:43.491 ] 00:20:43.491 }' 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:43.491 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.751 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:43.751 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:43.751 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.751 10:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.751 [2024-10-30 10:49:04.998265] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:43.751 [2024-10-30 10:49:05.055379] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:43.751 [2024-10-30 10:49:05.055642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.751 [2024-10-30 10:49:05.055775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:43.751 [2024-10-30 10:49:05.055832] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.751 "name": "raid_bdev1", 00:20:43.751 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:43.751 "strip_size_kb": 64, 00:20:43.751 "state": "online", 00:20:43.751 "raid_level": "raid5f", 00:20:43.751 "superblock": true, 00:20:43.751 "num_base_bdevs": 3, 00:20:43.751 "num_base_bdevs_discovered": 2, 00:20:43.751 "num_base_bdevs_operational": 2, 00:20:43.751 "base_bdevs_list": [ 00:20:43.751 { 00:20:43.751 "name": null, 00:20:43.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.751 "is_configured": false, 00:20:43.751 "data_offset": 0, 00:20:43.751 "data_size": 63488 00:20:43.751 }, 00:20:43.751 { 00:20:43.751 "name": "BaseBdev2", 00:20:43.751 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:43.751 "is_configured": true, 00:20:43.751 "data_offset": 2048, 00:20:43.751 "data_size": 63488 00:20:43.751 }, 00:20:43.751 { 00:20:43.751 "name": "BaseBdev3", 00:20:43.751 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:43.751 "is_configured": true, 00:20:43.751 "data_offset": 2048, 00:20:43.751 "data_size": 63488 00:20:43.751 } 00:20:43.751 ] 00:20:43.751 }' 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.751 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.320 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:44.320 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.320 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.320 [2024-10-30 10:49:05.611704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:44.320 [2024-10-30 10:49:05.611801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.320 [2024-10-30 10:49:05.611833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:44.320 [2024-10-30 10:49:05.611856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.320 [2024-10-30 10:49:05.612484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.320 [2024-10-30 10:49:05.612538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:44.320 [2024-10-30 10:49:05.612675] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:44.320 [2024-10-30 10:49:05.612699] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:44.320 [2024-10-30 10:49:05.612713] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
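Throughout this run the test extracts one array's info with `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'`. A small Python sketch of what that jq filter does to the RPC output, using sample data shaped like the JSON blobs captured in this log (the helper name `select_bdev` is made up for illustration):

```python
import json

# Hedged sketch: reproduce the test's jq filter
#   '.[] | select(.name == "raid_bdev1")'
# over bdev_raid_get_bdevs-style output. Field names mirror the JSON
# blobs in this log; the data itself is a trimmed sample.

sample = json.loads("""[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid5f",
   "strip_size_kb": 64, "num_base_bdevs": 3,
   "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3}
]""")

def select_bdev(bdevs, name):
    """Return the first bdev entry matching name, like jq's select()."""
    return next((b for b in bdevs if b["name"] == name), None)

info = select_bdev(sample, "raid_bdev1")
print(info["state"], info["num_base_bdevs_operational"])  # -> online 3
```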
00:20:44.320 [2024-10-30 10:49:05.612747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:44.320 [2024-10-30 10:49:05.627323] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:20:44.320 spare 00:20:44.320 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.320 10:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:44.320 [2024-10-30 10:49:05.634357] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.256 "name": "raid_bdev1", 00:20:45.256 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:45.256 "strip_size_kb": 64, 00:20:45.256 "state": 
"online", 00:20:45.256 "raid_level": "raid5f", 00:20:45.256 "superblock": true, 00:20:45.256 "num_base_bdevs": 3, 00:20:45.256 "num_base_bdevs_discovered": 3, 00:20:45.256 "num_base_bdevs_operational": 3, 00:20:45.256 "process": { 00:20:45.256 "type": "rebuild", 00:20:45.256 "target": "spare", 00:20:45.256 "progress": { 00:20:45.256 "blocks": 18432, 00:20:45.256 "percent": 14 00:20:45.256 } 00:20:45.256 }, 00:20:45.256 "base_bdevs_list": [ 00:20:45.256 { 00:20:45.256 "name": "spare", 00:20:45.256 "uuid": "a9d396c1-25c8-58f2-ba42-32d01cc6bf4d", 00:20:45.256 "is_configured": true, 00:20:45.256 "data_offset": 2048, 00:20:45.256 "data_size": 63488 00:20:45.256 }, 00:20:45.256 { 00:20:45.256 "name": "BaseBdev2", 00:20:45.256 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:45.256 "is_configured": true, 00:20:45.256 "data_offset": 2048, 00:20:45.256 "data_size": 63488 00:20:45.256 }, 00:20:45.256 { 00:20:45.256 "name": "BaseBdev3", 00:20:45.256 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:45.256 "is_configured": true, 00:20:45.256 "data_offset": 2048, 00:20:45.256 "data_size": 63488 00:20:45.256 } 00:20:45.256 ] 00:20:45.256 }' 00:20:45.256 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.515 [2024-10-30 10:49:06.792752] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.515 [2024-10-30 10:49:06.849087] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:45.515 [2024-10-30 10:49:06.849194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.515 [2024-10-30 10:49:06.849224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.515 [2024-10-30 10:49:06.849236] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.515 10:49:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.515 "name": "raid_bdev1", 00:20:45.515 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:45.515 "strip_size_kb": 64, 00:20:45.515 "state": "online", 00:20:45.515 "raid_level": "raid5f", 00:20:45.515 "superblock": true, 00:20:45.515 "num_base_bdevs": 3, 00:20:45.515 "num_base_bdevs_discovered": 2, 00:20:45.515 "num_base_bdevs_operational": 2, 00:20:45.515 "base_bdevs_list": [ 00:20:45.515 { 00:20:45.515 "name": null, 00:20:45.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.515 "is_configured": false, 00:20:45.515 "data_offset": 0, 00:20:45.515 "data_size": 63488 00:20:45.515 }, 00:20:45.515 { 00:20:45.515 "name": "BaseBdev2", 00:20:45.515 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:45.515 "is_configured": true, 00:20:45.515 "data_offset": 2048, 00:20:45.515 "data_size": 63488 00:20:45.515 }, 00:20:45.515 { 00:20:45.515 "name": "BaseBdev3", 00:20:45.515 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:45.515 "is_configured": true, 00:20:45.515 "data_offset": 2048, 00:20:45.515 "data_size": 63488 00:20:45.515 } 00:20:45.515 ] 00:20:45.515 }' 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.515 10:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:46.084 "name": "raid_bdev1", 00:20:46.084 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:46.084 "strip_size_kb": 64, 00:20:46.084 "state": "online", 00:20:46.084 "raid_level": "raid5f", 00:20:46.084 "superblock": true, 00:20:46.084 "num_base_bdevs": 3, 00:20:46.084 "num_base_bdevs_discovered": 2, 00:20:46.084 "num_base_bdevs_operational": 2, 00:20:46.084 "base_bdevs_list": [ 00:20:46.084 { 00:20:46.084 "name": null, 00:20:46.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.084 "is_configured": false, 00:20:46.084 "data_offset": 0, 00:20:46.084 "data_size": 63488 00:20:46.084 }, 00:20:46.084 { 00:20:46.084 "name": "BaseBdev2", 00:20:46.084 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:46.084 "is_configured": true, 00:20:46.084 "data_offset": 2048, 00:20:46.084 "data_size": 63488 00:20:46.084 }, 00:20:46.084 { 00:20:46.084 "name": "BaseBdev3", 00:20:46.084 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:46.084 
"is_configured": true, 00:20:46.084 "data_offset": 2048, 00:20:46.084 "data_size": 63488 00:20:46.084 } 00:20:46.084 ] 00:20:46.084 }' 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:46.084 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.342 [2024-10-30 10:49:07.613087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:46.342 [2024-10-30 10:49:07.613167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.342 [2024-10-30 10:49:07.613211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:46.342 [2024-10-30 10:49:07.613230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.342 [2024-10-30 10:49:07.613900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.342 
[2024-10-30 10:49:07.613952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.342 [2024-10-30 10:49:07.614100] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:46.342 [2024-10-30 10:49:07.614136] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:46.342 [2024-10-30 10:49:07.614166] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:46.342 [2024-10-30 10:49:07.614183] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:46.342 BaseBdev1 00:20:46.342 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.343 10:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.351 10:49:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.351 "name": "raid_bdev1", 00:20:47.351 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:47.351 "strip_size_kb": 64, 00:20:47.351 "state": "online", 00:20:47.351 "raid_level": "raid5f", 00:20:47.351 "superblock": true, 00:20:47.351 "num_base_bdevs": 3, 00:20:47.351 "num_base_bdevs_discovered": 2, 00:20:47.351 "num_base_bdevs_operational": 2, 00:20:47.351 "base_bdevs_list": [ 00:20:47.351 { 00:20:47.351 "name": null, 00:20:47.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.351 "is_configured": false, 00:20:47.351 "data_offset": 0, 00:20:47.351 "data_size": 63488 00:20:47.351 }, 00:20:47.351 { 00:20:47.351 "name": "BaseBdev2", 00:20:47.351 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:47.351 "is_configured": true, 00:20:47.351 "data_offset": 2048, 00:20:47.351 "data_size": 63488 00:20:47.351 }, 00:20:47.351 { 00:20:47.351 "name": "BaseBdev3", 00:20:47.351 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:47.351 "is_configured": true, 00:20:47.351 "data_offset": 2048, 00:20:47.351 "data_size": 63488 00:20:47.351 } 00:20:47.351 ] 00:20:47.351 }' 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.351 10:49:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.917 "name": "raid_bdev1", 00:20:47.917 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:47.917 "strip_size_kb": 64, 00:20:47.917 "state": "online", 00:20:47.917 "raid_level": "raid5f", 00:20:47.917 "superblock": true, 00:20:47.917 "num_base_bdevs": 3, 00:20:47.917 "num_base_bdevs_discovered": 2, 00:20:47.917 "num_base_bdevs_operational": 2, 00:20:47.917 "base_bdevs_list": [ 00:20:47.917 { 00:20:47.917 "name": null, 00:20:47.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.917 "is_configured": false, 00:20:47.917 "data_offset": 0, 00:20:47.917 "data_size": 63488 00:20:47.917 }, 00:20:47.917 { 00:20:47.917 "name": "BaseBdev2", 00:20:47.917 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 
00:20:47.917 "is_configured": true, 00:20:47.917 "data_offset": 2048, 00:20:47.917 "data_size": 63488 00:20:47.917 }, 00:20:47.917 { 00:20:47.917 "name": "BaseBdev3", 00:20:47.917 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:47.917 "is_configured": true, 00:20:47.917 "data_offset": 2048, 00:20:47.917 "data_size": 63488 00:20:47.917 } 00:20:47.917 ] 00:20:47.917 }' 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:47.917 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.918 10:49:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.918 [2024-10-30 10:49:09.289697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.918 [2024-10-30 10:49:09.289954] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:47.918 [2024-10-30 10:49:09.289978] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:47.918 request: 00:20:47.918 { 00:20:47.918 "base_bdev": "BaseBdev1", 00:20:47.918 "raid_bdev": "raid_bdev1", 00:20:47.918 "method": "bdev_raid_add_base_bdev", 00:20:47.918 "req_id": 1 00:20:47.918 } 00:20:47.918 Got JSON-RPC error response 00:20:47.918 response: 00:20:47.918 { 00:20:47.918 "code": -22, 00:20:47.918 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:47.918 } 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:47.918 10:49:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.852 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.109 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.109 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.109 "name": "raid_bdev1", 00:20:49.109 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:49.109 "strip_size_kb": 64, 00:20:49.109 "state": "online", 00:20:49.109 "raid_level": "raid5f", 00:20:49.109 "superblock": true, 00:20:49.109 "num_base_bdevs": 3, 00:20:49.109 "num_base_bdevs_discovered": 2, 00:20:49.109 "num_base_bdevs_operational": 2, 00:20:49.109 "base_bdevs_list": [ 00:20:49.109 { 00:20:49.109 "name": null, 00:20:49.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.109 "is_configured": false, 00:20:49.109 "data_offset": 0, 00:20:49.109 "data_size": 63488 00:20:49.109 }, 00:20:49.109 { 00:20:49.109 
"name": "BaseBdev2", 00:20:49.109 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:49.109 "is_configured": true, 00:20:49.109 "data_offset": 2048, 00:20:49.109 "data_size": 63488 00:20:49.109 }, 00:20:49.109 { 00:20:49.109 "name": "BaseBdev3", 00:20:49.109 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:49.109 "is_configured": true, 00:20:49.109 "data_offset": 2048, 00:20:49.109 "data_size": 63488 00:20:49.109 } 00:20:49.109 ] 00:20:49.109 }' 00:20:49.109 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.109 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.368 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.627 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.628 "name": "raid_bdev1", 00:20:49.628 "uuid": "045c14eb-3cbe-455e-81cd-7ea6249a37a3", 00:20:49.628 
"strip_size_kb": 64, 00:20:49.628 "state": "online", 00:20:49.628 "raid_level": "raid5f", 00:20:49.628 "superblock": true, 00:20:49.628 "num_base_bdevs": 3, 00:20:49.628 "num_base_bdevs_discovered": 2, 00:20:49.628 "num_base_bdevs_operational": 2, 00:20:49.628 "base_bdevs_list": [ 00:20:49.628 { 00:20:49.628 "name": null, 00:20:49.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.628 "is_configured": false, 00:20:49.628 "data_offset": 0, 00:20:49.628 "data_size": 63488 00:20:49.628 }, 00:20:49.628 { 00:20:49.628 "name": "BaseBdev2", 00:20:49.628 "uuid": "df9efc66-7d67-5bcf-be88-ae1ed95c9dd5", 00:20:49.628 "is_configured": true, 00:20:49.628 "data_offset": 2048, 00:20:49.628 "data_size": 63488 00:20:49.628 }, 00:20:49.628 { 00:20:49.628 "name": "BaseBdev3", 00:20:49.628 "uuid": "4407606c-6a2e-5087-ab60-14869ffa0143", 00:20:49.628 "is_configured": true, 00:20:49.628 "data_offset": 2048, 00:20:49.628 "data_size": 63488 00:20:49.628 } 00:20:49.628 ] 00:20:49.628 }' 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82547 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82547 ']' 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82547 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:20:49.628 10:49:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:49.628 10:49:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82547 00:20:49.628 10:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:49.628 10:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:49.628 killing process with pid 82547 00:20:49.628 10:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82547' 00:20:49.628 Received shutdown signal, test time was about 60.000000 seconds 00:20:49.628 00:20:49.628 Latency(us) 00:20:49.628 [2024-10-30T10:49:11.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.628 [2024-10-30T10:49:11.098Z] =================================================================================================================== 00:20:49.628 [2024-10-30T10:49:11.098Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:49.628 10:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82547 00:20:49.628 [2024-10-30 10:49:11.025370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:49.628 [2024-10-30 10:49:11.025525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:49.628 [2024-10-30 10:49:11.025605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:49.628 [2024-10-30 10:49:11.025625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:49.628 10:49:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82547 00:20:50.195 [2024-10-30 10:49:11.377024] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:51.131 10:49:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:51.131 00:20:51.131 real 0m25.017s 00:20:51.131 user 0m33.317s 
00:20:51.131 sys 0m2.672s 00:20:51.131 10:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:51.131 10:49:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.131 ************************************ 00:20:51.131 END TEST raid5f_rebuild_test_sb 00:20:51.131 ************************************ 00:20:51.131 10:49:12 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:51.131 10:49:12 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:20:51.131 10:49:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:51.131 10:49:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:51.131 10:49:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.131 ************************************ 00:20:51.131 START TEST raid5f_state_function_test 00:20:51.131 ************************************ 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83310 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:51.131 Process raid pid: 83310 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83310' 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83310 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83310 ']' 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:51.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:51.131 10:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.390 [2024-10-30 10:49:12.618463] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:20:51.390 [2024-10-30 10:49:12.618639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.390 [2024-10-30 10:49:12.804591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.649 [2024-10-30 10:49:12.943692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.908 [2024-10-30 10:49:13.157413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.908 [2024-10-30 10:49:13.157526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.508 [2024-10-30 10:49:13.707416] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:52.508 [2024-10-30 10:49:13.707480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:52.508 [2024-10-30 10:49:13.707496] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.508 [2024-10-30 10:49:13.707513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.508 [2024-10-30 10:49:13.707532] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:52.508 [2024-10-30 10:49:13.707546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:52.508 [2024-10-30 10:49:13.707555] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:52.508 [2024-10-30 10:49:13.707570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.508 "name": "Existed_Raid", 00:20:52.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.508 "strip_size_kb": 64, 00:20:52.508 "state": "configuring", 00:20:52.508 "raid_level": "raid5f", 00:20:52.508 "superblock": false, 00:20:52.508 "num_base_bdevs": 4, 00:20:52.508 "num_base_bdevs_discovered": 0, 00:20:52.508 "num_base_bdevs_operational": 4, 00:20:52.508 "base_bdevs_list": [ 00:20:52.508 { 00:20:52.508 "name": "BaseBdev1", 00:20:52.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.508 "is_configured": false, 00:20:52.508 "data_offset": 0, 00:20:52.508 "data_size": 0 00:20:52.508 }, 00:20:52.508 { 00:20:52.508 "name": "BaseBdev2", 00:20:52.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.508 "is_configured": false, 00:20:52.508 "data_offset": 0, 00:20:52.508 "data_size": 0 00:20:52.508 }, 00:20:52.508 { 00:20:52.508 "name": "BaseBdev3", 00:20:52.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.508 "is_configured": false, 00:20:52.508 "data_offset": 0, 00:20:52.508 "data_size": 0 00:20:52.508 }, 00:20:52.508 { 00:20:52.508 "name": "BaseBdev4", 00:20:52.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.508 "is_configured": false, 00:20:52.508 "data_offset": 0, 00:20:52.508 "data_size": 0 00:20:52.508 } 00:20:52.508 ] 00:20:52.508 }' 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.508 10:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.768 10:49:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:52.768 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.768 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.768 [2024-10-30 10:49:14.195481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:52.768 [2024-10-30 10:49:14.195581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:52.768 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.768 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:52.768 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.768 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.768 [2024-10-30 10:49:14.203458] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:52.768 [2024-10-30 10:49:14.203577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:52.768 [2024-10-30 10:49:14.203591] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.768 [2024-10-30 10:49:14.203605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.768 [2024-10-30 10:49:14.203614] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:52.768 [2024-10-30 10:49:14.203626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:52.768 [2024-10-30 10:49:14.203635] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:20:52.768 [2024-10-30 10:49:14.203648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:52.769 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.769 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:52.769 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.769 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.028 [2024-10-30 10:49:14.250326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:53.028 BaseBdev1 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.028 
10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.028 [ 00:20:53.028 { 00:20:53.028 "name": "BaseBdev1", 00:20:53.028 "aliases": [ 00:20:53.028 "9b534127-6634-4c3c-9e5f-ad606d09b54d" 00:20:53.028 ], 00:20:53.028 "product_name": "Malloc disk", 00:20:53.028 "block_size": 512, 00:20:53.028 "num_blocks": 65536, 00:20:53.028 "uuid": "9b534127-6634-4c3c-9e5f-ad606d09b54d", 00:20:53.028 "assigned_rate_limits": { 00:20:53.028 "rw_ios_per_sec": 0, 00:20:53.028 "rw_mbytes_per_sec": 0, 00:20:53.028 "r_mbytes_per_sec": 0, 00:20:53.028 "w_mbytes_per_sec": 0 00:20:53.028 }, 00:20:53.028 "claimed": true, 00:20:53.028 "claim_type": "exclusive_write", 00:20:53.028 "zoned": false, 00:20:53.028 "supported_io_types": { 00:20:53.028 "read": true, 00:20:53.028 "write": true, 00:20:53.028 "unmap": true, 00:20:53.028 "flush": true, 00:20:53.028 "reset": true, 00:20:53.028 "nvme_admin": false, 00:20:53.028 "nvme_io": false, 00:20:53.028 "nvme_io_md": false, 00:20:53.028 "write_zeroes": true, 00:20:53.028 "zcopy": true, 00:20:53.028 "get_zone_info": false, 00:20:53.028 "zone_management": false, 00:20:53.028 "zone_append": false, 00:20:53.028 "compare": false, 00:20:53.028 "compare_and_write": false, 00:20:53.028 "abort": true, 00:20:53.028 "seek_hole": false, 00:20:53.028 "seek_data": false, 00:20:53.028 "copy": true, 00:20:53.028 "nvme_iov_md": false 00:20:53.028 }, 00:20:53.028 "memory_domains": [ 00:20:53.028 { 00:20:53.028 "dma_device_id": "system", 00:20:53.028 "dma_device_type": 1 00:20:53.028 }, 00:20:53.028 { 00:20:53.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.028 "dma_device_type": 2 00:20:53.028 } 00:20:53.028 ], 00:20:53.028 "driver_specific": {} 00:20:53.028 } 
00:20:53.028 ] 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.028 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.029 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.029 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.029 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.029 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.029 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:53.029 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.029 "name": "Existed_Raid", 00:20:53.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.029 "strip_size_kb": 64, 00:20:53.029 "state": "configuring", 00:20:53.029 "raid_level": "raid5f", 00:20:53.029 "superblock": false, 00:20:53.029 "num_base_bdevs": 4, 00:20:53.029 "num_base_bdevs_discovered": 1, 00:20:53.029 "num_base_bdevs_operational": 4, 00:20:53.029 "base_bdevs_list": [ 00:20:53.029 { 00:20:53.029 "name": "BaseBdev1", 00:20:53.029 "uuid": "9b534127-6634-4c3c-9e5f-ad606d09b54d", 00:20:53.029 "is_configured": true, 00:20:53.029 "data_offset": 0, 00:20:53.029 "data_size": 65536 00:20:53.029 }, 00:20:53.029 { 00:20:53.029 "name": "BaseBdev2", 00:20:53.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.029 "is_configured": false, 00:20:53.029 "data_offset": 0, 00:20:53.029 "data_size": 0 00:20:53.029 }, 00:20:53.029 { 00:20:53.029 "name": "BaseBdev3", 00:20:53.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.029 "is_configured": false, 00:20:53.029 "data_offset": 0, 00:20:53.029 "data_size": 0 00:20:53.029 }, 00:20:53.029 { 00:20:53.029 "name": "BaseBdev4", 00:20:53.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.029 "is_configured": false, 00:20:53.029 "data_offset": 0, 00:20:53.029 "data_size": 0 00:20:53.029 } 00:20:53.029 ] 00:20:53.029 }' 00:20:53.029 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.029 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.597 
[2024-10-30 10:49:14.826530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:53.597 [2024-10-30 10:49:14.826608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.597 [2024-10-30 10:49:14.834610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:53.597 [2024-10-30 10:49:14.837112] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:53.597 [2024-10-30 10:49:14.837168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:53.597 [2024-10-30 10:49:14.837185] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:53.597 [2024-10-30 10:49:14.837202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:53.597 [2024-10-30 10:49:14.837212] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:53.597 [2024-10-30 10:49:14.837225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:53.597 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.598 "name": "Existed_Raid", 00:20:53.598 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:53.598 "strip_size_kb": 64, 00:20:53.598 "state": "configuring", 00:20:53.598 "raid_level": "raid5f", 00:20:53.598 "superblock": false, 00:20:53.598 "num_base_bdevs": 4, 00:20:53.598 "num_base_bdevs_discovered": 1, 00:20:53.598 "num_base_bdevs_operational": 4, 00:20:53.598 "base_bdevs_list": [ 00:20:53.598 { 00:20:53.598 "name": "BaseBdev1", 00:20:53.598 "uuid": "9b534127-6634-4c3c-9e5f-ad606d09b54d", 00:20:53.598 "is_configured": true, 00:20:53.598 "data_offset": 0, 00:20:53.598 "data_size": 65536 00:20:53.598 }, 00:20:53.598 { 00:20:53.598 "name": "BaseBdev2", 00:20:53.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.598 "is_configured": false, 00:20:53.598 "data_offset": 0, 00:20:53.598 "data_size": 0 00:20:53.598 }, 00:20:53.598 { 00:20:53.598 "name": "BaseBdev3", 00:20:53.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.598 "is_configured": false, 00:20:53.598 "data_offset": 0, 00:20:53.598 "data_size": 0 00:20:53.598 }, 00:20:53.598 { 00:20:53.598 "name": "BaseBdev4", 00:20:53.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.598 "is_configured": false, 00:20:53.598 "data_offset": 0, 00:20:53.598 "data_size": 0 00:20:53.598 } 00:20:53.598 ] 00:20:53.598 }' 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.598 10:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.857 [2024-10-30 10:49:15.317841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:53.857 BaseBdev2 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.857 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.117 [ 00:20:54.117 { 00:20:54.117 "name": "BaseBdev2", 00:20:54.117 "aliases": [ 00:20:54.117 "fd51e12c-0dfc-41aa-be6f-aedc02273109" 00:20:54.117 ], 00:20:54.117 "product_name": "Malloc disk", 00:20:54.117 "block_size": 512, 00:20:54.117 "num_blocks": 65536, 00:20:54.117 "uuid": "fd51e12c-0dfc-41aa-be6f-aedc02273109", 00:20:54.117 "assigned_rate_limits": { 00:20:54.117 "rw_ios_per_sec": 0, 00:20:54.117 "rw_mbytes_per_sec": 0, 00:20:54.117 
"r_mbytes_per_sec": 0, 00:20:54.117 "w_mbytes_per_sec": 0 00:20:54.117 }, 00:20:54.117 "claimed": true, 00:20:54.117 "claim_type": "exclusive_write", 00:20:54.117 "zoned": false, 00:20:54.117 "supported_io_types": { 00:20:54.117 "read": true, 00:20:54.117 "write": true, 00:20:54.117 "unmap": true, 00:20:54.117 "flush": true, 00:20:54.117 "reset": true, 00:20:54.117 "nvme_admin": false, 00:20:54.117 "nvme_io": false, 00:20:54.117 "nvme_io_md": false, 00:20:54.117 "write_zeroes": true, 00:20:54.117 "zcopy": true, 00:20:54.117 "get_zone_info": false, 00:20:54.117 "zone_management": false, 00:20:54.117 "zone_append": false, 00:20:54.117 "compare": false, 00:20:54.117 "compare_and_write": false, 00:20:54.117 "abort": true, 00:20:54.117 "seek_hole": false, 00:20:54.117 "seek_data": false, 00:20:54.117 "copy": true, 00:20:54.117 "nvme_iov_md": false 00:20:54.117 }, 00:20:54.117 "memory_domains": [ 00:20:54.117 { 00:20:54.117 "dma_device_id": "system", 00:20:54.117 "dma_device_type": 1 00:20:54.117 }, 00:20:54.117 { 00:20:54.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.117 "dma_device_type": 2 00:20:54.117 } 00:20:54.117 ], 00:20:54.117 "driver_specific": {} 00:20:54.117 } 00:20:54.117 ] 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.117 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.118 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.118 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.118 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.118 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.118 "name": "Existed_Raid", 00:20:54.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.118 "strip_size_kb": 64, 00:20:54.118 "state": "configuring", 00:20:54.118 "raid_level": "raid5f", 00:20:54.118 "superblock": false, 00:20:54.118 "num_base_bdevs": 4, 00:20:54.118 "num_base_bdevs_discovered": 2, 00:20:54.118 "num_base_bdevs_operational": 4, 00:20:54.118 "base_bdevs_list": [ 00:20:54.118 { 00:20:54.118 "name": "BaseBdev1", 00:20:54.118 "uuid": 
"9b534127-6634-4c3c-9e5f-ad606d09b54d", 00:20:54.118 "is_configured": true, 00:20:54.118 "data_offset": 0, 00:20:54.118 "data_size": 65536 00:20:54.118 }, 00:20:54.118 { 00:20:54.118 "name": "BaseBdev2", 00:20:54.118 "uuid": "fd51e12c-0dfc-41aa-be6f-aedc02273109", 00:20:54.118 "is_configured": true, 00:20:54.118 "data_offset": 0, 00:20:54.118 "data_size": 65536 00:20:54.118 }, 00:20:54.118 { 00:20:54.118 "name": "BaseBdev3", 00:20:54.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.118 "is_configured": false, 00:20:54.118 "data_offset": 0, 00:20:54.118 "data_size": 0 00:20:54.118 }, 00:20:54.118 { 00:20:54.118 "name": "BaseBdev4", 00:20:54.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.118 "is_configured": false, 00:20:54.118 "data_offset": 0, 00:20:54.118 "data_size": 0 00:20:54.118 } 00:20:54.118 ] 00:20:54.118 }' 00:20:54.118 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.118 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.686 [2024-10-30 10:49:15.920167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:54.686 BaseBdev3 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.686 [ 00:20:54.686 { 00:20:54.686 "name": "BaseBdev3", 00:20:54.686 "aliases": [ 00:20:54.686 "e46b7bfa-e945-4783-ac73-854f014bf5d3" 00:20:54.686 ], 00:20:54.686 "product_name": "Malloc disk", 00:20:54.686 "block_size": 512, 00:20:54.686 "num_blocks": 65536, 00:20:54.686 "uuid": "e46b7bfa-e945-4783-ac73-854f014bf5d3", 00:20:54.686 "assigned_rate_limits": { 00:20:54.686 "rw_ios_per_sec": 0, 00:20:54.686 "rw_mbytes_per_sec": 0, 00:20:54.686 "r_mbytes_per_sec": 0, 00:20:54.686 "w_mbytes_per_sec": 0 00:20:54.686 }, 00:20:54.686 "claimed": true, 00:20:54.686 "claim_type": "exclusive_write", 00:20:54.686 "zoned": false, 00:20:54.686 "supported_io_types": { 00:20:54.686 "read": true, 00:20:54.686 "write": true, 00:20:54.686 "unmap": true, 00:20:54.686 "flush": true, 00:20:54.686 "reset": true, 00:20:54.686 "nvme_admin": false, 
00:20:54.686 "nvme_io": false, 00:20:54.686 "nvme_io_md": false, 00:20:54.686 "write_zeroes": true, 00:20:54.686 "zcopy": true, 00:20:54.686 "get_zone_info": false, 00:20:54.686 "zone_management": false, 00:20:54.686 "zone_append": false, 00:20:54.686 "compare": false, 00:20:54.686 "compare_and_write": false, 00:20:54.686 "abort": true, 00:20:54.686 "seek_hole": false, 00:20:54.686 "seek_data": false, 00:20:54.686 "copy": true, 00:20:54.686 "nvme_iov_md": false 00:20:54.686 }, 00:20:54.686 "memory_domains": [ 00:20:54.686 { 00:20:54.686 "dma_device_id": "system", 00:20:54.686 "dma_device_type": 1 00:20:54.686 }, 00:20:54.686 { 00:20:54.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.686 "dma_device_type": 2 00:20:54.686 } 00:20:54.686 ], 00:20:54.686 "driver_specific": {} 00:20:54.686 } 00:20:54.686 ] 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.686 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.687 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.687 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.687 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.687 10:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.687 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.687 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.687 10:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.687 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.687 "name": "Existed_Raid", 00:20:54.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.687 "strip_size_kb": 64, 00:20:54.687 "state": "configuring", 00:20:54.687 "raid_level": "raid5f", 00:20:54.687 "superblock": false, 00:20:54.687 "num_base_bdevs": 4, 00:20:54.687 "num_base_bdevs_discovered": 3, 00:20:54.687 "num_base_bdevs_operational": 4, 00:20:54.687 "base_bdevs_list": [ 00:20:54.687 { 00:20:54.687 "name": "BaseBdev1", 00:20:54.687 "uuid": "9b534127-6634-4c3c-9e5f-ad606d09b54d", 00:20:54.687 "is_configured": true, 00:20:54.687 "data_offset": 0, 00:20:54.687 "data_size": 65536 00:20:54.687 }, 00:20:54.687 { 00:20:54.687 "name": "BaseBdev2", 00:20:54.687 "uuid": "fd51e12c-0dfc-41aa-be6f-aedc02273109", 00:20:54.687 "is_configured": true, 00:20:54.687 "data_offset": 0, 00:20:54.687 "data_size": 65536 00:20:54.687 }, 00:20:54.687 { 
00:20:54.687 "name": "BaseBdev3", 00:20:54.687 "uuid": "e46b7bfa-e945-4783-ac73-854f014bf5d3", 00:20:54.687 "is_configured": true, 00:20:54.687 "data_offset": 0, 00:20:54.687 "data_size": 65536 00:20:54.687 }, 00:20:54.687 { 00:20:54.687 "name": "BaseBdev4", 00:20:54.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.687 "is_configured": false, 00:20:54.687 "data_offset": 0, 00:20:54.687 "data_size": 0 00:20:54.687 } 00:20:54.687 ] 00:20:54.687 }' 00:20:54.687 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.687 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 [2024-10-30 10:49:16.532451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:55.255 [2024-10-30 10:49:16.532595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:55.255 [2024-10-30 10:49:16.532612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:55.255 [2024-10-30 10:49:16.533017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:55.255 [2024-10-30 10:49:16.539747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:55.255 [2024-10-30 10:49:16.539796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:55.255 [2024-10-30 10:49:16.540213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.255 BaseBdev4 00:20:55.255 10:49:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.255 [ 00:20:55.255 { 00:20:55.255 "name": "BaseBdev4", 00:20:55.255 "aliases": [ 00:20:55.255 "ac31d1b5-6559-4b12-acc1-a457a1781e24" 00:20:55.255 ], 00:20:55.255 "product_name": "Malloc disk", 00:20:55.255 "block_size": 512, 00:20:55.255 "num_blocks": 65536, 00:20:55.255 "uuid": "ac31d1b5-6559-4b12-acc1-a457a1781e24", 00:20:55.255 "assigned_rate_limits": { 00:20:55.255 "rw_ios_per_sec": 0, 00:20:55.255 
"rw_mbytes_per_sec": 0, 00:20:55.255 "r_mbytes_per_sec": 0, 00:20:55.255 "w_mbytes_per_sec": 0 00:20:55.255 }, 00:20:55.255 "claimed": true, 00:20:55.255 "claim_type": "exclusive_write", 00:20:55.255 "zoned": false, 00:20:55.255 "supported_io_types": { 00:20:55.255 "read": true, 00:20:55.255 "write": true, 00:20:55.255 "unmap": true, 00:20:55.255 "flush": true, 00:20:55.255 "reset": true, 00:20:55.255 "nvme_admin": false, 00:20:55.255 "nvme_io": false, 00:20:55.255 "nvme_io_md": false, 00:20:55.255 "write_zeroes": true, 00:20:55.255 "zcopy": true, 00:20:55.255 "get_zone_info": false, 00:20:55.255 "zone_management": false, 00:20:55.255 "zone_append": false, 00:20:55.255 "compare": false, 00:20:55.255 "compare_and_write": false, 00:20:55.255 "abort": true, 00:20:55.255 "seek_hole": false, 00:20:55.255 "seek_data": false, 00:20:55.255 "copy": true, 00:20:55.255 "nvme_iov_md": false 00:20:55.255 }, 00:20:55.255 "memory_domains": [ 00:20:55.255 { 00:20:55.255 "dma_device_id": "system", 00:20:55.255 "dma_device_type": 1 00:20:55.255 }, 00:20:55.255 { 00:20:55.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.255 "dma_device_type": 2 00:20:55.255 } 00:20:55.255 ], 00:20:55.255 "driver_specific": {} 00:20:55.255 } 00:20:55.255 ] 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.255 10:49:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:55.255 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.256 "name": "Existed_Raid", 00:20:55.256 "uuid": "be8cfbb2-0e24-44e6-be6d-065f6c690aeb", 00:20:55.256 "strip_size_kb": 64, 00:20:55.256 "state": "online", 00:20:55.256 "raid_level": "raid5f", 00:20:55.256 "superblock": false, 00:20:55.256 "num_base_bdevs": 4, 00:20:55.256 "num_base_bdevs_discovered": 4, 00:20:55.256 "num_base_bdevs_operational": 4, 00:20:55.256 "base_bdevs_list": [ 00:20:55.256 { 00:20:55.256 "name": 
"BaseBdev1", 00:20:55.256 "uuid": "9b534127-6634-4c3c-9e5f-ad606d09b54d", 00:20:55.256 "is_configured": true, 00:20:55.256 "data_offset": 0, 00:20:55.256 "data_size": 65536 00:20:55.256 }, 00:20:55.256 { 00:20:55.256 "name": "BaseBdev2", 00:20:55.256 "uuid": "fd51e12c-0dfc-41aa-be6f-aedc02273109", 00:20:55.256 "is_configured": true, 00:20:55.256 "data_offset": 0, 00:20:55.256 "data_size": 65536 00:20:55.256 }, 00:20:55.256 { 00:20:55.256 "name": "BaseBdev3", 00:20:55.256 "uuid": "e46b7bfa-e945-4783-ac73-854f014bf5d3", 00:20:55.256 "is_configured": true, 00:20:55.256 "data_offset": 0, 00:20:55.256 "data_size": 65536 00:20:55.256 }, 00:20:55.256 { 00:20:55.256 "name": "BaseBdev4", 00:20:55.256 "uuid": "ac31d1b5-6559-4b12-acc1-a457a1781e24", 00:20:55.256 "is_configured": true, 00:20:55.256 "data_offset": 0, 00:20:55.256 "data_size": 65536 00:20:55.256 } 00:20:55.256 ] 00:20:55.256 }' 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.256 10:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.824 [2024-10-30 10:49:17.120109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:55.824 "name": "Existed_Raid", 00:20:55.824 "aliases": [ 00:20:55.824 "be8cfbb2-0e24-44e6-be6d-065f6c690aeb" 00:20:55.824 ], 00:20:55.824 "product_name": "Raid Volume", 00:20:55.824 "block_size": 512, 00:20:55.824 "num_blocks": 196608, 00:20:55.824 "uuid": "be8cfbb2-0e24-44e6-be6d-065f6c690aeb", 00:20:55.824 "assigned_rate_limits": { 00:20:55.824 "rw_ios_per_sec": 0, 00:20:55.824 "rw_mbytes_per_sec": 0, 00:20:55.824 "r_mbytes_per_sec": 0, 00:20:55.824 "w_mbytes_per_sec": 0 00:20:55.824 }, 00:20:55.824 "claimed": false, 00:20:55.824 "zoned": false, 00:20:55.824 "supported_io_types": { 00:20:55.824 "read": true, 00:20:55.824 "write": true, 00:20:55.824 "unmap": false, 00:20:55.824 "flush": false, 00:20:55.824 "reset": true, 00:20:55.824 "nvme_admin": false, 00:20:55.824 "nvme_io": false, 00:20:55.824 "nvme_io_md": false, 00:20:55.824 "write_zeroes": true, 00:20:55.824 "zcopy": false, 00:20:55.824 "get_zone_info": false, 00:20:55.824 "zone_management": false, 00:20:55.824 "zone_append": false, 00:20:55.824 "compare": false, 00:20:55.824 "compare_and_write": false, 00:20:55.824 "abort": false, 00:20:55.824 "seek_hole": false, 00:20:55.824 "seek_data": false, 00:20:55.824 "copy": false, 00:20:55.824 "nvme_iov_md": false 00:20:55.824 }, 00:20:55.824 "driver_specific": { 00:20:55.824 "raid": { 00:20:55.824 "uuid": "be8cfbb2-0e24-44e6-be6d-065f6c690aeb", 00:20:55.824 "strip_size_kb": 64, 
00:20:55.824 "state": "online", 00:20:55.824 "raid_level": "raid5f", 00:20:55.824 "superblock": false, 00:20:55.824 "num_base_bdevs": 4, 00:20:55.824 "num_base_bdevs_discovered": 4, 00:20:55.824 "num_base_bdevs_operational": 4, 00:20:55.824 "base_bdevs_list": [ 00:20:55.824 { 00:20:55.824 "name": "BaseBdev1", 00:20:55.824 "uuid": "9b534127-6634-4c3c-9e5f-ad606d09b54d", 00:20:55.824 "is_configured": true, 00:20:55.824 "data_offset": 0, 00:20:55.824 "data_size": 65536 00:20:55.824 }, 00:20:55.824 { 00:20:55.824 "name": "BaseBdev2", 00:20:55.824 "uuid": "fd51e12c-0dfc-41aa-be6f-aedc02273109", 00:20:55.824 "is_configured": true, 00:20:55.824 "data_offset": 0, 00:20:55.824 "data_size": 65536 00:20:55.824 }, 00:20:55.824 { 00:20:55.824 "name": "BaseBdev3", 00:20:55.824 "uuid": "e46b7bfa-e945-4783-ac73-854f014bf5d3", 00:20:55.824 "is_configured": true, 00:20:55.824 "data_offset": 0, 00:20:55.824 "data_size": 65536 00:20:55.824 }, 00:20:55.824 { 00:20:55.824 "name": "BaseBdev4", 00:20:55.824 "uuid": "ac31d1b5-6559-4b12-acc1-a457a1781e24", 00:20:55.824 "is_configured": true, 00:20:55.824 "data_offset": 0, 00:20:55.824 "data_size": 65536 00:20:55.824 } 00:20:55.824 ] 00:20:55.824 } 00:20:55.824 } 00:20:55.824 }' 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:55.824 BaseBdev2 00:20:55.824 BaseBdev3 00:20:55.824 BaseBdev4' 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:55.824 10:49:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:55.824 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.084 10:49:17 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:20:56.084 [2024-10-30 10:49:17.488064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.343 10:49:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.343 "name": "Existed_Raid", 00:20:56.343 "uuid": "be8cfbb2-0e24-44e6-be6d-065f6c690aeb", 00:20:56.343 "strip_size_kb": 64, 00:20:56.343 "state": "online", 00:20:56.343 "raid_level": "raid5f", 00:20:56.343 "superblock": false, 00:20:56.343 "num_base_bdevs": 4, 00:20:56.343 "num_base_bdevs_discovered": 3, 00:20:56.343 "num_base_bdevs_operational": 3, 00:20:56.343 "base_bdevs_list": [ 00:20:56.343 { 00:20:56.343 "name": null, 00:20:56.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.343 "is_configured": false, 00:20:56.343 "data_offset": 0, 00:20:56.343 "data_size": 65536 00:20:56.343 }, 00:20:56.343 { 00:20:56.343 "name": "BaseBdev2", 00:20:56.343 "uuid": "fd51e12c-0dfc-41aa-be6f-aedc02273109", 00:20:56.343 "is_configured": true, 00:20:56.343 "data_offset": 0, 00:20:56.343 "data_size": 65536 00:20:56.343 }, 00:20:56.343 { 00:20:56.343 "name": "BaseBdev3", 00:20:56.343 "uuid": "e46b7bfa-e945-4783-ac73-854f014bf5d3", 00:20:56.343 "is_configured": true, 00:20:56.343 "data_offset": 0, 00:20:56.343 "data_size": 65536 00:20:56.343 }, 00:20:56.343 { 00:20:56.343 "name": "BaseBdev4", 00:20:56.343 "uuid": "ac31d1b5-6559-4b12-acc1-a457a1781e24", 00:20:56.343 "is_configured": true, 00:20:56.343 "data_offset": 0, 00:20:56.343 "data_size": 65536 00:20:56.343 } 00:20:56.343 ] 00:20:56.343 }' 00:20:56.343 
10:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.343 10:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.912 [2024-10-30 10:49:18.195885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:56.912 [2024-10-30 10:49:18.196034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.912 [2024-10-30 10:49:18.286340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.912 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.912 [2024-10-30 10:49:18.350409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:57.171 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.171 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:57.171 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:57.171 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.171 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.172 [2024-10-30 10:49:18.493214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:57.172 [2024-10-30 10:49:18.493281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.172 10:49:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.172 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.431 BaseBdev2 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.431 [ 00:20:57.431 { 00:20:57.431 "name": "BaseBdev2", 00:20:57.431 "aliases": [ 00:20:57.431 "82f226b8-42ea-4300-a0f8-25114334b85c" 00:20:57.431 ], 00:20:57.431 "product_name": "Malloc disk", 00:20:57.431 "block_size": 512, 00:20:57.431 "num_blocks": 65536, 00:20:57.431 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:20:57.431 "assigned_rate_limits": { 00:20:57.431 "rw_ios_per_sec": 0, 00:20:57.431 "rw_mbytes_per_sec": 0, 00:20:57.431 "r_mbytes_per_sec": 0, 00:20:57.431 "w_mbytes_per_sec": 0 00:20:57.431 }, 00:20:57.431 "claimed": false, 00:20:57.431 "zoned": false, 00:20:57.431 "supported_io_types": { 00:20:57.431 "read": true, 00:20:57.431 "write": true, 00:20:57.431 "unmap": true, 00:20:57.431 "flush": true, 00:20:57.431 "reset": true, 00:20:57.431 "nvme_admin": false, 00:20:57.431 "nvme_io": false, 00:20:57.431 "nvme_io_md": false, 00:20:57.431 "write_zeroes": true, 00:20:57.431 "zcopy": true, 00:20:57.431 "get_zone_info": false, 00:20:57.431 "zone_management": false, 00:20:57.431 "zone_append": false, 00:20:57.431 "compare": false, 00:20:57.431 "compare_and_write": false, 00:20:57.431 "abort": true, 00:20:57.431 "seek_hole": false, 00:20:57.431 "seek_data": false, 00:20:57.431 "copy": true, 00:20:57.431 "nvme_iov_md": false 00:20:57.431 }, 00:20:57.431 "memory_domains": [ 00:20:57.431 { 00:20:57.431 "dma_device_id": "system", 00:20:57.431 "dma_device_type": 1 00:20:57.431 }, 
00:20:57.431 { 00:20:57.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.431 "dma_device_type": 2 00:20:57.431 } 00:20:57.431 ], 00:20:57.431 "driver_specific": {} 00:20:57.431 } 00:20:57.431 ] 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.431 BaseBdev3 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.431 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.432 [ 00:20:57.432 { 00:20:57.432 "name": "BaseBdev3", 00:20:57.432 "aliases": [ 00:20:57.432 "2961d244-ebb4-484f-a823-49eac82feee9" 00:20:57.432 ], 00:20:57.432 "product_name": "Malloc disk", 00:20:57.432 "block_size": 512, 00:20:57.432 "num_blocks": 65536, 00:20:57.432 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:20:57.432 "assigned_rate_limits": { 00:20:57.432 "rw_ios_per_sec": 0, 00:20:57.432 "rw_mbytes_per_sec": 0, 00:20:57.432 "r_mbytes_per_sec": 0, 00:20:57.432 "w_mbytes_per_sec": 0 00:20:57.432 }, 00:20:57.432 "claimed": false, 00:20:57.432 "zoned": false, 00:20:57.432 "supported_io_types": { 00:20:57.432 "read": true, 00:20:57.432 "write": true, 00:20:57.432 "unmap": true, 00:20:57.432 "flush": true, 00:20:57.432 "reset": true, 00:20:57.432 "nvme_admin": false, 00:20:57.432 "nvme_io": false, 00:20:57.432 "nvme_io_md": false, 00:20:57.432 "write_zeroes": true, 00:20:57.432 "zcopy": true, 00:20:57.432 "get_zone_info": false, 00:20:57.432 "zone_management": false, 00:20:57.432 "zone_append": false, 00:20:57.432 "compare": false, 00:20:57.432 "compare_and_write": false, 00:20:57.432 "abort": true, 00:20:57.432 "seek_hole": false, 00:20:57.432 "seek_data": false, 00:20:57.432 "copy": true, 00:20:57.432 "nvme_iov_md": false 00:20:57.432 }, 00:20:57.432 "memory_domains": [ 00:20:57.432 { 00:20:57.432 "dma_device_id": "system", 00:20:57.432 
"dma_device_type": 1 00:20:57.432 }, 00:20:57.432 { 00:20:57.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.432 "dma_device_type": 2 00:20:57.432 } 00:20:57.432 ], 00:20:57.432 "driver_specific": {} 00:20:57.432 } 00:20:57.432 ] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.432 BaseBdev4 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:57.432 10:49:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.432 [ 00:20:57.432 { 00:20:57.432 "name": "BaseBdev4", 00:20:57.432 "aliases": [ 00:20:57.432 "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31" 00:20:57.432 ], 00:20:57.432 "product_name": "Malloc disk", 00:20:57.432 "block_size": 512, 00:20:57.432 "num_blocks": 65536, 00:20:57.432 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:20:57.432 "assigned_rate_limits": { 00:20:57.432 "rw_ios_per_sec": 0, 00:20:57.432 "rw_mbytes_per_sec": 0, 00:20:57.432 "r_mbytes_per_sec": 0, 00:20:57.432 "w_mbytes_per_sec": 0 00:20:57.432 }, 00:20:57.432 "claimed": false, 00:20:57.432 "zoned": false, 00:20:57.432 "supported_io_types": { 00:20:57.432 "read": true, 00:20:57.432 "write": true, 00:20:57.432 "unmap": true, 00:20:57.432 "flush": true, 00:20:57.432 "reset": true, 00:20:57.432 "nvme_admin": false, 00:20:57.432 "nvme_io": false, 00:20:57.432 "nvme_io_md": false, 00:20:57.432 "write_zeroes": true, 00:20:57.432 "zcopy": true, 00:20:57.432 "get_zone_info": false, 00:20:57.432 "zone_management": false, 00:20:57.432 "zone_append": false, 00:20:57.432 "compare": false, 00:20:57.432 "compare_and_write": false, 00:20:57.432 "abort": true, 00:20:57.432 "seek_hole": false, 00:20:57.432 "seek_data": false, 00:20:57.432 "copy": true, 00:20:57.432 "nvme_iov_md": false 00:20:57.432 }, 00:20:57.432 "memory_domains": [ 00:20:57.432 { 00:20:57.432 
"dma_device_id": "system", 00:20:57.432 "dma_device_type": 1 00:20:57.432 }, 00:20:57.432 { 00:20:57.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.432 "dma_device_type": 2 00:20:57.432 } 00:20:57.432 ], 00:20:57.432 "driver_specific": {} 00:20:57.432 } 00:20:57.432 ] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.432 [2024-10-30 10:49:18.848303] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:57.432 [2024-10-30 10:49:18.848390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:57.432 [2024-10-30 10:49:18.848434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:57.432 [2024-10-30 10:49:18.850767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:57.432 [2024-10-30 10:49:18.850859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:20:57.432 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.433 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.692 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.692 "name": "Existed_Raid", 00:20:57.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.692 "strip_size_kb": 64, 00:20:57.692 "state": "configuring", 00:20:57.692 "raid_level": "raid5f", 00:20:57.692 "superblock": false, 00:20:57.692 
"num_base_bdevs": 4, 00:20:57.692 "num_base_bdevs_discovered": 3, 00:20:57.692 "num_base_bdevs_operational": 4, 00:20:57.692 "base_bdevs_list": [ 00:20:57.692 { 00:20:57.692 "name": "BaseBdev1", 00:20:57.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.692 "is_configured": false, 00:20:57.692 "data_offset": 0, 00:20:57.692 "data_size": 0 00:20:57.692 }, 00:20:57.692 { 00:20:57.692 "name": "BaseBdev2", 00:20:57.692 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:20:57.692 "is_configured": true, 00:20:57.692 "data_offset": 0, 00:20:57.692 "data_size": 65536 00:20:57.692 }, 00:20:57.692 { 00:20:57.692 "name": "BaseBdev3", 00:20:57.692 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:20:57.692 "is_configured": true, 00:20:57.692 "data_offset": 0, 00:20:57.692 "data_size": 65536 00:20:57.692 }, 00:20:57.692 { 00:20:57.692 "name": "BaseBdev4", 00:20:57.692 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:20:57.692 "is_configured": true, 00:20:57.692 "data_offset": 0, 00:20:57.692 "data_size": 65536 00:20:57.692 } 00:20:57.692 ] 00:20:57.692 }' 00:20:57.692 10:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.692 10:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.951 [2024-10-30 10:49:19.388480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.951 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.210 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.210 "name": "Existed_Raid", 00:20:58.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.210 "strip_size_kb": 64, 00:20:58.210 "state": "configuring", 00:20:58.210 "raid_level": "raid5f", 00:20:58.210 "superblock": false, 00:20:58.210 "num_base_bdevs": 4, 
00:20:58.210 "num_base_bdevs_discovered": 2, 00:20:58.210 "num_base_bdevs_operational": 4, 00:20:58.210 "base_bdevs_list": [ 00:20:58.210 { 00:20:58.210 "name": "BaseBdev1", 00:20:58.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.210 "is_configured": false, 00:20:58.210 "data_offset": 0, 00:20:58.210 "data_size": 0 00:20:58.210 }, 00:20:58.210 { 00:20:58.210 "name": null, 00:20:58.210 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:20:58.210 "is_configured": false, 00:20:58.210 "data_offset": 0, 00:20:58.210 "data_size": 65536 00:20:58.210 }, 00:20:58.210 { 00:20:58.210 "name": "BaseBdev3", 00:20:58.210 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:20:58.210 "is_configured": true, 00:20:58.210 "data_offset": 0, 00:20:58.210 "data_size": 65536 00:20:58.210 }, 00:20:58.210 { 00:20:58.210 "name": "BaseBdev4", 00:20:58.210 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:20:58.211 "is_configured": true, 00:20:58.211 "data_offset": 0, 00:20:58.211 "data_size": 65536 00:20:58.211 } 00:20:58.211 ] 00:20:58.211 }' 00:20:58.211 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.211 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.468 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.468 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.468 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.468 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:58.468 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:58.726 10:49:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.726 [2024-10-30 10:49:19.986567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:58.726 BaseBdev1 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:58.726 10:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.726 10:49:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.726 [ 00:20:58.726 { 00:20:58.726 "name": "BaseBdev1", 00:20:58.726 "aliases": [ 00:20:58.726 "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e" 00:20:58.726 ], 00:20:58.726 "product_name": "Malloc disk", 00:20:58.726 "block_size": 512, 00:20:58.726 "num_blocks": 65536, 00:20:58.726 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:20:58.726 "assigned_rate_limits": { 00:20:58.726 "rw_ios_per_sec": 0, 00:20:58.726 "rw_mbytes_per_sec": 0, 00:20:58.726 "r_mbytes_per_sec": 0, 00:20:58.726 "w_mbytes_per_sec": 0 00:20:58.726 }, 00:20:58.726 "claimed": true, 00:20:58.726 "claim_type": "exclusive_write", 00:20:58.726 "zoned": false, 00:20:58.726 "supported_io_types": { 00:20:58.726 "read": true, 00:20:58.726 "write": true, 00:20:58.726 "unmap": true, 00:20:58.726 "flush": true, 00:20:58.726 "reset": true, 00:20:58.726 "nvme_admin": false, 00:20:58.726 "nvme_io": false, 00:20:58.726 "nvme_io_md": false, 00:20:58.726 "write_zeroes": true, 00:20:58.726 "zcopy": true, 00:20:58.726 "get_zone_info": false, 00:20:58.726 "zone_management": false, 00:20:58.726 "zone_append": false, 00:20:58.726 "compare": false, 00:20:58.726 "compare_and_write": false, 00:20:58.726 "abort": true, 00:20:58.726 "seek_hole": false, 00:20:58.726 "seek_data": false, 00:20:58.726 "copy": true, 00:20:58.726 "nvme_iov_md": false 00:20:58.726 }, 00:20:58.726 "memory_domains": [ 00:20:58.726 { 00:20:58.726 "dma_device_id": "system", 00:20:58.726 "dma_device_type": 1 00:20:58.726 }, 00:20:58.726 { 00:20:58.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.726 "dma_device_type": 2 00:20:58.726 } 00:20:58.726 ], 00:20:58.726 "driver_specific": {} 00:20:58.726 } 00:20:58.726 ] 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:58.726 10:49:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.726 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.727 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.727 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.727 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.727 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.727 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.727 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.727 "name": "Existed_Raid", 00:20:58.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.727 "strip_size_kb": 64, 00:20:58.727 "state": 
"configuring", 00:20:58.727 "raid_level": "raid5f", 00:20:58.727 "superblock": false, 00:20:58.727 "num_base_bdevs": 4, 00:20:58.727 "num_base_bdevs_discovered": 3, 00:20:58.727 "num_base_bdevs_operational": 4, 00:20:58.727 "base_bdevs_list": [ 00:20:58.727 { 00:20:58.727 "name": "BaseBdev1", 00:20:58.727 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:20:58.727 "is_configured": true, 00:20:58.727 "data_offset": 0, 00:20:58.727 "data_size": 65536 00:20:58.727 }, 00:20:58.727 { 00:20:58.727 "name": null, 00:20:58.727 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:20:58.727 "is_configured": false, 00:20:58.727 "data_offset": 0, 00:20:58.727 "data_size": 65536 00:20:58.727 }, 00:20:58.727 { 00:20:58.727 "name": "BaseBdev3", 00:20:58.727 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:20:58.727 "is_configured": true, 00:20:58.727 "data_offset": 0, 00:20:58.727 "data_size": 65536 00:20:58.727 }, 00:20:58.727 { 00:20:58.727 "name": "BaseBdev4", 00:20:58.727 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:20:58.727 "is_configured": true, 00:20:58.727 "data_offset": 0, 00:20:58.727 "data_size": 65536 00:20:58.727 } 00:20:58.727 ] 00:20:58.727 }' 00:20:58.727 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.727 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.293 10:49:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.293 [2024-10-30 10:49:20.586828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.293 10:49:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.293 "name": "Existed_Raid", 00:20:59.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.293 "strip_size_kb": 64, 00:20:59.293 "state": "configuring", 00:20:59.293 "raid_level": "raid5f", 00:20:59.293 "superblock": false, 00:20:59.293 "num_base_bdevs": 4, 00:20:59.293 "num_base_bdevs_discovered": 2, 00:20:59.293 "num_base_bdevs_operational": 4, 00:20:59.293 "base_bdevs_list": [ 00:20:59.293 { 00:20:59.293 "name": "BaseBdev1", 00:20:59.293 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:20:59.293 "is_configured": true, 00:20:59.293 "data_offset": 0, 00:20:59.293 "data_size": 65536 00:20:59.293 }, 00:20:59.293 { 00:20:59.293 "name": null, 00:20:59.293 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:20:59.293 "is_configured": false, 00:20:59.293 "data_offset": 0, 00:20:59.293 "data_size": 65536 00:20:59.293 }, 00:20:59.293 { 00:20:59.293 "name": null, 00:20:59.293 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:20:59.293 "is_configured": false, 00:20:59.293 "data_offset": 0, 00:20:59.293 "data_size": 65536 00:20:59.293 }, 00:20:59.293 { 00:20:59.293 "name": "BaseBdev4", 00:20:59.293 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:20:59.293 "is_configured": true, 00:20:59.293 "data_offset": 0, 00:20:59.293 "data_size": 65536 00:20:59.293 } 00:20:59.293 ] 00:20:59.293 }' 00:20:59.293 10:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.293 10:49:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.860 [2024-10-30 10:49:21.178948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.860 
10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.860 "name": "Existed_Raid", 00:20:59.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.860 "strip_size_kb": 64, 00:20:59.860 "state": "configuring", 00:20:59.860 "raid_level": "raid5f", 00:20:59.860 "superblock": false, 00:20:59.860 "num_base_bdevs": 4, 00:20:59.860 "num_base_bdevs_discovered": 3, 00:20:59.860 "num_base_bdevs_operational": 4, 00:20:59.860 "base_bdevs_list": [ 00:20:59.860 { 00:20:59.860 "name": "BaseBdev1", 00:20:59.860 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:20:59.860 "is_configured": true, 00:20:59.860 "data_offset": 0, 00:20:59.860 "data_size": 65536 00:20:59.860 }, 00:20:59.860 { 00:20:59.860 "name": null, 00:20:59.860 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:20:59.860 "is_configured": 
false, 00:20:59.860 "data_offset": 0, 00:20:59.860 "data_size": 65536 00:20:59.860 }, 00:20:59.860 { 00:20:59.860 "name": "BaseBdev3", 00:20:59.860 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:20:59.860 "is_configured": true, 00:20:59.860 "data_offset": 0, 00:20:59.860 "data_size": 65536 00:20:59.860 }, 00:20:59.860 { 00:20:59.860 "name": "BaseBdev4", 00:20:59.860 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:20:59.860 "is_configured": true, 00:20:59.860 "data_offset": 0, 00:20:59.860 "data_size": 65536 00:20:59.860 } 00:20:59.860 ] 00:20:59.860 }' 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.860 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.427 [2024-10-30 10:49:21.747275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.427 "name": "Existed_Raid", 00:21:00.427 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:00.427 "strip_size_kb": 64, 00:21:00.427 "state": "configuring", 00:21:00.427 "raid_level": "raid5f", 00:21:00.427 "superblock": false, 00:21:00.427 "num_base_bdevs": 4, 00:21:00.427 "num_base_bdevs_discovered": 2, 00:21:00.427 "num_base_bdevs_operational": 4, 00:21:00.427 "base_bdevs_list": [ 00:21:00.427 { 00:21:00.427 "name": null, 00:21:00.427 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:21:00.427 "is_configured": false, 00:21:00.427 "data_offset": 0, 00:21:00.427 "data_size": 65536 00:21:00.427 }, 00:21:00.427 { 00:21:00.427 "name": null, 00:21:00.427 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:21:00.427 "is_configured": false, 00:21:00.427 "data_offset": 0, 00:21:00.427 "data_size": 65536 00:21:00.427 }, 00:21:00.427 { 00:21:00.427 "name": "BaseBdev3", 00:21:00.427 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:21:00.427 "is_configured": true, 00:21:00.427 "data_offset": 0, 00:21:00.427 "data_size": 65536 00:21:00.427 }, 00:21:00.427 { 00:21:00.427 "name": "BaseBdev4", 00:21:00.427 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:21:00.427 "is_configured": true, 00:21:00.427 "data_offset": 0, 00:21:00.427 "data_size": 65536 00:21:00.427 } 00:21:00.427 ] 00:21:00.427 }' 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.427 10:49:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.020 [2024-10-30 10:49:22.398869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.020 "name": "Existed_Raid", 00:21:01.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.020 "strip_size_kb": 64, 00:21:01.020 "state": "configuring", 00:21:01.020 "raid_level": "raid5f", 00:21:01.020 "superblock": false, 00:21:01.020 "num_base_bdevs": 4, 00:21:01.020 "num_base_bdevs_discovered": 3, 00:21:01.020 "num_base_bdevs_operational": 4, 00:21:01.020 "base_bdevs_list": [ 00:21:01.020 { 00:21:01.020 "name": null, 00:21:01.020 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:21:01.020 "is_configured": false, 00:21:01.020 "data_offset": 0, 00:21:01.020 "data_size": 65536 00:21:01.020 }, 00:21:01.020 { 00:21:01.020 "name": "BaseBdev2", 00:21:01.020 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:21:01.020 "is_configured": true, 00:21:01.020 "data_offset": 0, 00:21:01.020 "data_size": 65536 00:21:01.020 }, 00:21:01.020 { 00:21:01.020 "name": "BaseBdev3", 00:21:01.020 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:21:01.020 "is_configured": true, 00:21:01.020 "data_offset": 0, 00:21:01.020 "data_size": 65536 00:21:01.020 }, 00:21:01.020 { 00:21:01.020 "name": "BaseBdev4", 00:21:01.020 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:21:01.020 "is_configured": true, 00:21:01.020 "data_offset": 0, 00:21:01.020 "data_size": 65536 00:21:01.020 } 00:21:01.020 ] 00:21:01.020 }' 00:21:01.020 10:49:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.020 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:01.588 10:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.588 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1e5cfd50-0f97-4527-a0c1-e05719d3ab8e 00:21:01.588 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.588 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.847 [2024-10-30 10:49:23.073439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:01.847 [2024-10-30 
10:49:23.073521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:01.847 [2024-10-30 10:49:23.073549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:01.847 [2024-10-30 10:49:23.073886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:01.847 [2024-10-30 10:49:23.080051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:01.847 [2024-10-30 10:49:23.080099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:01.847 [2024-10-30 10:49:23.080442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.847 NewBaseBdev 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.847 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.847 [ 00:21:01.847 { 00:21:01.847 "name": "NewBaseBdev", 00:21:01.847 "aliases": [ 00:21:01.847 "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e" 00:21:01.847 ], 00:21:01.847 "product_name": "Malloc disk", 00:21:01.847 "block_size": 512, 00:21:01.847 "num_blocks": 65536, 00:21:01.847 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:21:01.847 "assigned_rate_limits": { 00:21:01.847 "rw_ios_per_sec": 0, 00:21:01.847 "rw_mbytes_per_sec": 0, 00:21:01.847 "r_mbytes_per_sec": 0, 00:21:01.847 "w_mbytes_per_sec": 0 00:21:01.847 }, 00:21:01.847 "claimed": true, 00:21:01.847 "claim_type": "exclusive_write", 00:21:01.848 "zoned": false, 00:21:01.848 "supported_io_types": { 00:21:01.848 "read": true, 00:21:01.848 "write": true, 00:21:01.848 "unmap": true, 00:21:01.848 "flush": true, 00:21:01.848 "reset": true, 00:21:01.848 "nvme_admin": false, 00:21:01.848 "nvme_io": false, 00:21:01.848 "nvme_io_md": false, 00:21:01.848 "write_zeroes": true, 00:21:01.848 "zcopy": true, 00:21:01.848 "get_zone_info": false, 00:21:01.848 "zone_management": false, 00:21:01.848 "zone_append": false, 00:21:01.848 "compare": false, 00:21:01.848 "compare_and_write": false, 00:21:01.848 "abort": true, 00:21:01.848 "seek_hole": false, 00:21:01.848 "seek_data": false, 00:21:01.848 "copy": true, 00:21:01.848 "nvme_iov_md": false 00:21:01.848 }, 00:21:01.848 "memory_domains": [ 00:21:01.848 { 00:21:01.848 "dma_device_id": "system", 00:21:01.848 "dma_device_type": 1 00:21:01.848 }, 00:21:01.848 { 00:21:01.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.848 "dma_device_type": 2 00:21:01.848 } 
00:21:01.848 ], 00:21:01.848 "driver_specific": {} 00:21:01.848 } 00:21:01.848 ] 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.848 "name": "Existed_Raid", 00:21:01.848 "uuid": "990d9019-4fe3-4c41-b8c3-89400aa73bbf", 00:21:01.848 "strip_size_kb": 64, 00:21:01.848 "state": "online", 00:21:01.848 "raid_level": "raid5f", 00:21:01.848 "superblock": false, 00:21:01.848 "num_base_bdevs": 4, 00:21:01.848 "num_base_bdevs_discovered": 4, 00:21:01.848 "num_base_bdevs_operational": 4, 00:21:01.848 "base_bdevs_list": [ 00:21:01.848 { 00:21:01.848 "name": "NewBaseBdev", 00:21:01.848 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:21:01.848 "is_configured": true, 00:21:01.848 "data_offset": 0, 00:21:01.848 "data_size": 65536 00:21:01.848 }, 00:21:01.848 { 00:21:01.848 "name": "BaseBdev2", 00:21:01.848 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:21:01.848 "is_configured": true, 00:21:01.848 "data_offset": 0, 00:21:01.848 "data_size": 65536 00:21:01.848 }, 00:21:01.848 { 00:21:01.848 "name": "BaseBdev3", 00:21:01.848 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:21:01.848 "is_configured": true, 00:21:01.848 "data_offset": 0, 00:21:01.848 "data_size": 65536 00:21:01.848 }, 00:21:01.848 { 00:21:01.848 "name": "BaseBdev4", 00:21:01.848 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:21:01.848 "is_configured": true, 00:21:01.848 "data_offset": 0, 00:21:01.848 "data_size": 65536 00:21:01.848 } 00:21:01.848 ] 00:21:01.848 }' 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.848 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.416 [2024-10-30 10:49:23.624080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.416 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:02.416 "name": "Existed_Raid", 00:21:02.416 "aliases": [ 00:21:02.416 "990d9019-4fe3-4c41-b8c3-89400aa73bbf" 00:21:02.416 ], 00:21:02.416 "product_name": "Raid Volume", 00:21:02.416 "block_size": 512, 00:21:02.416 "num_blocks": 196608, 00:21:02.416 "uuid": "990d9019-4fe3-4c41-b8c3-89400aa73bbf", 00:21:02.416 "assigned_rate_limits": { 00:21:02.416 "rw_ios_per_sec": 0, 00:21:02.416 "rw_mbytes_per_sec": 0, 00:21:02.416 "r_mbytes_per_sec": 0, 00:21:02.416 "w_mbytes_per_sec": 0 00:21:02.416 }, 00:21:02.416 "claimed": false, 00:21:02.416 "zoned": false, 00:21:02.416 "supported_io_types": { 00:21:02.416 "read": true, 00:21:02.416 "write": true, 00:21:02.416 "unmap": false, 00:21:02.416 "flush": false, 00:21:02.416 "reset": true, 00:21:02.416 "nvme_admin": false, 00:21:02.416 "nvme_io": false, 00:21:02.416 "nvme_io_md": 
false, 00:21:02.416 "write_zeroes": true, 00:21:02.416 "zcopy": false, 00:21:02.416 "get_zone_info": false, 00:21:02.416 "zone_management": false, 00:21:02.416 "zone_append": false, 00:21:02.416 "compare": false, 00:21:02.416 "compare_and_write": false, 00:21:02.416 "abort": false, 00:21:02.416 "seek_hole": false, 00:21:02.416 "seek_data": false, 00:21:02.417 "copy": false, 00:21:02.417 "nvme_iov_md": false 00:21:02.417 }, 00:21:02.417 "driver_specific": { 00:21:02.417 "raid": { 00:21:02.417 "uuid": "990d9019-4fe3-4c41-b8c3-89400aa73bbf", 00:21:02.417 "strip_size_kb": 64, 00:21:02.417 "state": "online", 00:21:02.417 "raid_level": "raid5f", 00:21:02.417 "superblock": false, 00:21:02.417 "num_base_bdevs": 4, 00:21:02.417 "num_base_bdevs_discovered": 4, 00:21:02.417 "num_base_bdevs_operational": 4, 00:21:02.417 "base_bdevs_list": [ 00:21:02.417 { 00:21:02.417 "name": "NewBaseBdev", 00:21:02.417 "uuid": "1e5cfd50-0f97-4527-a0c1-e05719d3ab8e", 00:21:02.417 "is_configured": true, 00:21:02.417 "data_offset": 0, 00:21:02.417 "data_size": 65536 00:21:02.417 }, 00:21:02.417 { 00:21:02.417 "name": "BaseBdev2", 00:21:02.417 "uuid": "82f226b8-42ea-4300-a0f8-25114334b85c", 00:21:02.417 "is_configured": true, 00:21:02.417 "data_offset": 0, 00:21:02.417 "data_size": 65536 00:21:02.417 }, 00:21:02.417 { 00:21:02.417 "name": "BaseBdev3", 00:21:02.417 "uuid": "2961d244-ebb4-484f-a823-49eac82feee9", 00:21:02.417 "is_configured": true, 00:21:02.417 "data_offset": 0, 00:21:02.417 "data_size": 65536 00:21:02.417 }, 00:21:02.417 { 00:21:02.417 "name": "BaseBdev4", 00:21:02.417 "uuid": "83772cf3-9b06-40dd-8a0e-b5a7e4ea5e31", 00:21:02.417 "is_configured": true, 00:21:02.417 "data_offset": 0, 00:21:02.417 "data_size": 65536 00:21:02.417 } 00:21:02.417 ] 00:21:02.417 } 00:21:02.417 } 00:21:02.417 }' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:02.417 10:49:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:02.417 BaseBdev2 00:21:02.417 BaseBdev3 00:21:02.417 BaseBdev4' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.417 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.676 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.676 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.677 10:49:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.677 [2024-10-30 10:49:23.987861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:02.677 [2024-10-30 10:49:23.987913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.677 [2024-10-30 10:49:23.988041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.677 [2024-10-30 10:49:23.988433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.677 [2024-10-30 10:49:23.988462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83310 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83310 ']' 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83310 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:21:02.677 10:49:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83310 00:21:02.677 10:49:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:02.677 10:49:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:02.677 killing process with pid 83310 00:21:02.677 10:49:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83310' 00:21:02.677 10:49:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83310 00:21:02.677 [2024-10-30 10:49:24.026676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:02.677 10:49:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83310 00:21:02.947 [2024-10-30 10:49:24.361633] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:03.923 10:49:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:03.923 00:21:03.923 real 0m12.831s 00:21:03.923 user 0m21.434s 00:21:03.923 sys 0m1.842s 00:21:03.923 10:49:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:03.923 10:49:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.923 ************************************ 00:21:03.923 END TEST raid5f_state_function_test 00:21:03.923 ************************************ 00:21:03.923 10:49:25 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:21:03.923 10:49:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:03.923 10:49:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:03.923 10:49:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:03.923 ************************************ 00:21:03.923 START TEST 
raid5f_state_function_test_sb 00:21:03.923 ************************************ 00:21:03.923 10:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:21:03.923 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:04.183 
10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83991 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83991' 00:21:04.183 Process raid pid: 83991 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:04.183 10:49:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83991 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83991 ']' 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:04.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.183 10:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.184 10:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:04.184 10:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.184 [2024-10-30 10:49:25.507948] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:21:04.184 [2024-10-30 10:49:25.508181] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.442 [2024-10-30 10:49:25.693845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.442 [2024-10-30 10:49:25.826788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.701 [2024-10-30 10:49:26.035356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:04.701 [2024-10-30 10:49:26.035412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.268 [2024-10-30 10:49:26.489649] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:05.268 [2024-10-30 10:49:26.489738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:05.268 [2024-10-30 10:49:26.489753] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:05.268 [2024-10-30 10:49:26.489776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:05.268 [2024-10-30 10:49:26.489785] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:21:05.268 [2024-10-30 10:49:26.489798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:05.268 [2024-10-30 10:49:26.489806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:05.268 [2024-10-30 10:49:26.489820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.268 "name": "Existed_Raid", 00:21:05.268 "uuid": "e374d7b7-b152-4eb6-a1e4-cf911c11394a", 00:21:05.268 "strip_size_kb": 64, 00:21:05.268 "state": "configuring", 00:21:05.268 "raid_level": "raid5f", 00:21:05.268 "superblock": true, 00:21:05.268 "num_base_bdevs": 4, 00:21:05.268 "num_base_bdevs_discovered": 0, 00:21:05.268 "num_base_bdevs_operational": 4, 00:21:05.268 "base_bdevs_list": [ 00:21:05.268 { 00:21:05.268 "name": "BaseBdev1", 00:21:05.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.268 "is_configured": false, 00:21:05.268 "data_offset": 0, 00:21:05.268 "data_size": 0 00:21:05.268 }, 00:21:05.268 { 00:21:05.268 "name": "BaseBdev2", 00:21:05.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.268 "is_configured": false, 00:21:05.268 "data_offset": 0, 00:21:05.268 "data_size": 0 00:21:05.268 }, 00:21:05.268 { 00:21:05.268 "name": "BaseBdev3", 00:21:05.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.268 "is_configured": false, 00:21:05.268 "data_offset": 0, 00:21:05.268 "data_size": 0 00:21:05.268 }, 00:21:05.268 { 00:21:05.268 "name": "BaseBdev4", 00:21:05.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.268 "is_configured": false, 00:21:05.268 "data_offset": 0, 00:21:05.268 "data_size": 0 00:21:05.268 } 00:21:05.268 ] 00:21:05.268 }' 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.268 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:05.527 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:05.527 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.527 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.527 [2024-10-30 10:49:26.989731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:05.527 [2024-10-30 10:49:26.989797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:05.527 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.527 10:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:05.527 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.527 10:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.786 [2024-10-30 10:49:26.997735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:05.786 [2024-10-30 10:49:26.997783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:05.786 [2024-10-30 10:49:26.997798] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:05.786 [2024-10-30 10:49:26.997813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:05.786 [2024-10-30 10:49:26.997822] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:05.786 [2024-10-30 10:49:26.997836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:05.786 [2024-10-30 10:49:26.997845] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:05.786 [2024-10-30 10:49:26.997858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.786 [2024-10-30 10:49:27.044368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.786 BaseBdev1 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.786 [ 00:21:05.786 { 00:21:05.786 "name": "BaseBdev1", 00:21:05.786 "aliases": [ 00:21:05.786 "d929d0d0-8bb0-410a-abe3-739e03188e6f" 00:21:05.786 ], 00:21:05.786 "product_name": "Malloc disk", 00:21:05.786 "block_size": 512, 00:21:05.786 "num_blocks": 65536, 00:21:05.786 "uuid": "d929d0d0-8bb0-410a-abe3-739e03188e6f", 00:21:05.786 "assigned_rate_limits": { 00:21:05.786 "rw_ios_per_sec": 0, 00:21:05.786 "rw_mbytes_per_sec": 0, 00:21:05.786 "r_mbytes_per_sec": 0, 00:21:05.786 "w_mbytes_per_sec": 0 00:21:05.786 }, 00:21:05.786 "claimed": true, 00:21:05.786 "claim_type": "exclusive_write", 00:21:05.786 "zoned": false, 00:21:05.786 "supported_io_types": { 00:21:05.786 "read": true, 00:21:05.786 "write": true, 00:21:05.786 "unmap": true, 00:21:05.786 "flush": true, 00:21:05.786 "reset": true, 00:21:05.786 "nvme_admin": false, 00:21:05.786 "nvme_io": false, 00:21:05.786 "nvme_io_md": false, 00:21:05.786 "write_zeroes": true, 00:21:05.786 "zcopy": true, 00:21:05.786 "get_zone_info": false, 00:21:05.786 "zone_management": false, 00:21:05.786 "zone_append": false, 00:21:05.786 "compare": false, 00:21:05.786 "compare_and_write": false, 00:21:05.786 "abort": true, 00:21:05.786 "seek_hole": false, 00:21:05.786 "seek_data": false, 00:21:05.786 "copy": true, 00:21:05.786 "nvme_iov_md": false 00:21:05.786 }, 00:21:05.786 "memory_domains": [ 00:21:05.786 { 00:21:05.786 "dma_device_id": "system", 00:21:05.786 "dma_device_type": 1 00:21:05.786 }, 00:21:05.786 { 00:21:05.786 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:05.786 "dma_device_type": 2 00:21:05.786 } 00:21:05.786 ], 00:21:05.786 "driver_specific": {} 00:21:05.786 } 00:21:05.786 ] 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.786 10:49:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.786 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.786 "name": "Existed_Raid", 00:21:05.786 "uuid": "d89e8345-0df2-4641-b1e7-92e93d96d4a6", 00:21:05.786 "strip_size_kb": 64, 00:21:05.786 "state": "configuring", 00:21:05.786 "raid_level": "raid5f", 00:21:05.786 "superblock": true, 00:21:05.786 "num_base_bdevs": 4, 00:21:05.786 "num_base_bdevs_discovered": 1, 00:21:05.786 "num_base_bdevs_operational": 4, 00:21:05.786 "base_bdevs_list": [ 00:21:05.786 { 00:21:05.786 "name": "BaseBdev1", 00:21:05.786 "uuid": "d929d0d0-8bb0-410a-abe3-739e03188e6f", 00:21:05.786 "is_configured": true, 00:21:05.786 "data_offset": 2048, 00:21:05.786 "data_size": 63488 00:21:05.787 }, 00:21:05.787 { 00:21:05.787 "name": "BaseBdev2", 00:21:05.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.787 "is_configured": false, 00:21:05.787 "data_offset": 0, 00:21:05.787 "data_size": 0 00:21:05.787 }, 00:21:05.787 { 00:21:05.787 "name": "BaseBdev3", 00:21:05.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.787 "is_configured": false, 00:21:05.787 "data_offset": 0, 00:21:05.787 "data_size": 0 00:21:05.787 }, 00:21:05.787 { 00:21:05.787 "name": "BaseBdev4", 00:21:05.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.787 "is_configured": false, 00:21:05.787 "data_offset": 0, 00:21:05.787 "data_size": 0 00:21:05.787 } 00:21:05.787 ] 00:21:05.787 }' 00:21:05.787 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.787 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:06.354 10:49:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.354 [2024-10-30 10:49:27.600731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:06.354 [2024-10-30 10:49:27.600811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.354 [2024-10-30 10:49:27.612746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.354 [2024-10-30 10:49:27.615401] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:06.354 [2024-10-30 10:49:27.615456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:06.354 [2024-10-30 10:49:27.615472] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:06.354 [2024-10-30 10:49:27.615489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:06.354 [2024-10-30 10:49:27.615514] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:06.354 [2024-10-30 10:49:27.615552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.354 10:49:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.354 "name": "Existed_Raid", 00:21:06.354 "uuid": "de84b0f1-9cb6-41e7-b22f-b2ec677b564c", 00:21:06.354 "strip_size_kb": 64, 00:21:06.354 "state": "configuring", 00:21:06.354 "raid_level": "raid5f", 00:21:06.354 "superblock": true, 00:21:06.354 "num_base_bdevs": 4, 00:21:06.354 "num_base_bdevs_discovered": 1, 00:21:06.354 "num_base_bdevs_operational": 4, 00:21:06.354 "base_bdevs_list": [ 00:21:06.354 { 00:21:06.354 "name": "BaseBdev1", 00:21:06.354 "uuid": "d929d0d0-8bb0-410a-abe3-739e03188e6f", 00:21:06.354 "is_configured": true, 00:21:06.354 "data_offset": 2048, 00:21:06.354 "data_size": 63488 00:21:06.354 }, 00:21:06.354 { 00:21:06.354 "name": "BaseBdev2", 00:21:06.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.354 "is_configured": false, 00:21:06.354 "data_offset": 0, 00:21:06.354 "data_size": 0 00:21:06.354 }, 00:21:06.354 { 00:21:06.354 "name": "BaseBdev3", 00:21:06.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.354 "is_configured": false, 00:21:06.354 "data_offset": 0, 00:21:06.354 "data_size": 0 00:21:06.354 }, 00:21:06.354 { 00:21:06.354 "name": "BaseBdev4", 00:21:06.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.354 "is_configured": false, 00:21:06.354 "data_offset": 0, 00:21:06.354 "data_size": 0 00:21:06.354 } 00:21:06.354 ] 00:21:06.354 }' 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.354 10:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 [2024-10-30 10:49:28.151677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.922 BaseBdev2 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 [ 00:21:06.922 { 00:21:06.922 "name": "BaseBdev2", 00:21:06.922 "aliases": [ 00:21:06.922 
"6e14dff9-6df3-4084-99d6-d23739a5e8cf" 00:21:06.922 ], 00:21:06.922 "product_name": "Malloc disk", 00:21:06.922 "block_size": 512, 00:21:06.922 "num_blocks": 65536, 00:21:06.922 "uuid": "6e14dff9-6df3-4084-99d6-d23739a5e8cf", 00:21:06.922 "assigned_rate_limits": { 00:21:06.922 "rw_ios_per_sec": 0, 00:21:06.922 "rw_mbytes_per_sec": 0, 00:21:06.922 "r_mbytes_per_sec": 0, 00:21:06.922 "w_mbytes_per_sec": 0 00:21:06.922 }, 00:21:06.922 "claimed": true, 00:21:06.922 "claim_type": "exclusive_write", 00:21:06.922 "zoned": false, 00:21:06.922 "supported_io_types": { 00:21:06.922 "read": true, 00:21:06.922 "write": true, 00:21:06.922 "unmap": true, 00:21:06.922 "flush": true, 00:21:06.922 "reset": true, 00:21:06.922 "nvme_admin": false, 00:21:06.922 "nvme_io": false, 00:21:06.922 "nvme_io_md": false, 00:21:06.922 "write_zeroes": true, 00:21:06.922 "zcopy": true, 00:21:06.922 "get_zone_info": false, 00:21:06.922 "zone_management": false, 00:21:06.922 "zone_append": false, 00:21:06.922 "compare": false, 00:21:06.922 "compare_and_write": false, 00:21:06.922 "abort": true, 00:21:06.922 "seek_hole": false, 00:21:06.922 "seek_data": false, 00:21:06.922 "copy": true, 00:21:06.922 "nvme_iov_md": false 00:21:06.922 }, 00:21:06.922 "memory_domains": [ 00:21:06.922 { 00:21:06.922 "dma_device_id": "system", 00:21:06.922 "dma_device_type": 1 00:21:06.922 }, 00:21:06.922 { 00:21:06.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.922 "dma_device_type": 2 00:21:06.922 } 00:21:06.922 ], 00:21:06.922 "driver_specific": {} 00:21:06.922 } 00:21:06.922 ] 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.922 "name": "Existed_Raid", 00:21:06.922 "uuid": 
"de84b0f1-9cb6-41e7-b22f-b2ec677b564c", 00:21:06.922 "strip_size_kb": 64, 00:21:06.922 "state": "configuring", 00:21:06.922 "raid_level": "raid5f", 00:21:06.922 "superblock": true, 00:21:06.922 "num_base_bdevs": 4, 00:21:06.922 "num_base_bdevs_discovered": 2, 00:21:06.922 "num_base_bdevs_operational": 4, 00:21:06.922 "base_bdevs_list": [ 00:21:06.922 { 00:21:06.922 "name": "BaseBdev1", 00:21:06.922 "uuid": "d929d0d0-8bb0-410a-abe3-739e03188e6f", 00:21:06.922 "is_configured": true, 00:21:06.922 "data_offset": 2048, 00:21:06.922 "data_size": 63488 00:21:06.922 }, 00:21:06.922 { 00:21:06.922 "name": "BaseBdev2", 00:21:06.922 "uuid": "6e14dff9-6df3-4084-99d6-d23739a5e8cf", 00:21:06.922 "is_configured": true, 00:21:06.922 "data_offset": 2048, 00:21:06.922 "data_size": 63488 00:21:06.922 }, 00:21:06.922 { 00:21:06.922 "name": "BaseBdev3", 00:21:06.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.922 "is_configured": false, 00:21:06.922 "data_offset": 0, 00:21:06.922 "data_size": 0 00:21:06.922 }, 00:21:06.922 { 00:21:06.922 "name": "BaseBdev4", 00:21:06.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.922 "is_configured": false, 00:21:06.922 "data_offset": 0, 00:21:06.922 "data_size": 0 00:21:06.922 } 00:21:06.922 ] 00:21:06.922 }' 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.922 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.490 [2024-10-30 10:49:28.743362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:07.490 BaseBdev3 
00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.490 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.490 [ 00:21:07.490 { 00:21:07.490 "name": "BaseBdev3", 00:21:07.490 "aliases": [ 00:21:07.490 "d320fe9f-993b-4db6-baec-20f51ac69366" 00:21:07.490 ], 00:21:07.490 "product_name": "Malloc disk", 00:21:07.490 "block_size": 512, 00:21:07.490 "num_blocks": 65536, 00:21:07.490 "uuid": "d320fe9f-993b-4db6-baec-20f51ac69366", 00:21:07.490 
"assigned_rate_limits": { 00:21:07.491 "rw_ios_per_sec": 0, 00:21:07.491 "rw_mbytes_per_sec": 0, 00:21:07.491 "r_mbytes_per_sec": 0, 00:21:07.491 "w_mbytes_per_sec": 0 00:21:07.491 }, 00:21:07.491 "claimed": true, 00:21:07.491 "claim_type": "exclusive_write", 00:21:07.491 "zoned": false, 00:21:07.491 "supported_io_types": { 00:21:07.491 "read": true, 00:21:07.491 "write": true, 00:21:07.491 "unmap": true, 00:21:07.491 "flush": true, 00:21:07.491 "reset": true, 00:21:07.491 "nvme_admin": false, 00:21:07.491 "nvme_io": false, 00:21:07.491 "nvme_io_md": false, 00:21:07.491 "write_zeroes": true, 00:21:07.491 "zcopy": true, 00:21:07.491 "get_zone_info": false, 00:21:07.491 "zone_management": false, 00:21:07.491 "zone_append": false, 00:21:07.491 "compare": false, 00:21:07.491 "compare_and_write": false, 00:21:07.491 "abort": true, 00:21:07.491 "seek_hole": false, 00:21:07.491 "seek_data": false, 00:21:07.491 "copy": true, 00:21:07.491 "nvme_iov_md": false 00:21:07.491 }, 00:21:07.491 "memory_domains": [ 00:21:07.491 { 00:21:07.491 "dma_device_id": "system", 00:21:07.491 "dma_device_type": 1 00:21:07.491 }, 00:21:07.491 { 00:21:07.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.491 "dma_device_type": 2 00:21:07.491 } 00:21:07.491 ], 00:21:07.491 "driver_specific": {} 00:21:07.491 } 00:21:07.491 ] 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.491 "name": "Existed_Raid", 00:21:07.491 "uuid": "de84b0f1-9cb6-41e7-b22f-b2ec677b564c", 00:21:07.491 "strip_size_kb": 64, 00:21:07.491 "state": "configuring", 00:21:07.491 "raid_level": "raid5f", 00:21:07.491 "superblock": true, 00:21:07.491 "num_base_bdevs": 4, 00:21:07.491 "num_base_bdevs_discovered": 3, 
00:21:07.491 "num_base_bdevs_operational": 4, 00:21:07.491 "base_bdevs_list": [ 00:21:07.491 { 00:21:07.491 "name": "BaseBdev1", 00:21:07.491 "uuid": "d929d0d0-8bb0-410a-abe3-739e03188e6f", 00:21:07.491 "is_configured": true, 00:21:07.491 "data_offset": 2048, 00:21:07.491 "data_size": 63488 00:21:07.491 }, 00:21:07.491 { 00:21:07.491 "name": "BaseBdev2", 00:21:07.491 "uuid": "6e14dff9-6df3-4084-99d6-d23739a5e8cf", 00:21:07.491 "is_configured": true, 00:21:07.491 "data_offset": 2048, 00:21:07.491 "data_size": 63488 00:21:07.491 }, 00:21:07.491 { 00:21:07.491 "name": "BaseBdev3", 00:21:07.491 "uuid": "d320fe9f-993b-4db6-baec-20f51ac69366", 00:21:07.491 "is_configured": true, 00:21:07.491 "data_offset": 2048, 00:21:07.491 "data_size": 63488 00:21:07.491 }, 00:21:07.491 { 00:21:07.491 "name": "BaseBdev4", 00:21:07.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.491 "is_configured": false, 00:21:07.491 "data_offset": 0, 00:21:07.491 "data_size": 0 00:21:07.491 } 00:21:07.491 ] 00:21:07.491 }' 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.491 10:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.060 [2024-10-30 10:49:29.348008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:08.060 [2024-10-30 10:49:29.348440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:08.060 [2024-10-30 10:49:29.348462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:08.060 [2024-10-30 
10:49:29.348791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:08.060 BaseBdev4 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.060 [2024-10-30 10:49:29.355938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:08.060 [2024-10-30 10:49:29.356016] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:08.060 [2024-10-30 10:49:29.356326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:08.060 10:49:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.060 [ 00:21:08.060 { 00:21:08.060 "name": "BaseBdev4", 00:21:08.060 "aliases": [ 00:21:08.060 "61508ce2-c546-4660-9ff2-7a1d9f7d935a" 00:21:08.060 ], 00:21:08.060 "product_name": "Malloc disk", 00:21:08.060 "block_size": 512, 00:21:08.060 "num_blocks": 65536, 00:21:08.060 "uuid": "61508ce2-c546-4660-9ff2-7a1d9f7d935a", 00:21:08.060 "assigned_rate_limits": { 00:21:08.060 "rw_ios_per_sec": 0, 00:21:08.060 "rw_mbytes_per_sec": 0, 00:21:08.060 "r_mbytes_per_sec": 0, 00:21:08.060 "w_mbytes_per_sec": 0 00:21:08.060 }, 00:21:08.060 "claimed": true, 00:21:08.060 "claim_type": "exclusive_write", 00:21:08.060 "zoned": false, 00:21:08.060 "supported_io_types": { 00:21:08.060 "read": true, 00:21:08.060 "write": true, 00:21:08.060 "unmap": true, 00:21:08.060 "flush": true, 00:21:08.060 "reset": true, 00:21:08.060 "nvme_admin": false, 00:21:08.060 "nvme_io": false, 00:21:08.060 "nvme_io_md": false, 00:21:08.060 "write_zeroes": true, 00:21:08.060 "zcopy": true, 00:21:08.060 "get_zone_info": false, 00:21:08.060 "zone_management": false, 00:21:08.060 "zone_append": false, 00:21:08.060 "compare": false, 00:21:08.060 "compare_and_write": false, 00:21:08.060 "abort": true, 00:21:08.060 "seek_hole": false, 00:21:08.060 "seek_data": false, 00:21:08.060 "copy": true, 00:21:08.060 "nvme_iov_md": false 00:21:08.060 }, 00:21:08.060 "memory_domains": [ 00:21:08.060 { 00:21:08.060 "dma_device_id": "system", 00:21:08.060 "dma_device_type": 1 00:21:08.060 }, 00:21:08.060 { 00:21:08.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.060 "dma_device_type": 2 00:21:08.060 } 00:21:08.060 ], 00:21:08.060 "driver_specific": {} 00:21:08.060 } 00:21:08.060 ] 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.060 10:49:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.060 "name": "Existed_Raid", 00:21:08.060 "uuid": "de84b0f1-9cb6-41e7-b22f-b2ec677b564c", 00:21:08.060 "strip_size_kb": 64, 00:21:08.060 "state": "online", 00:21:08.060 "raid_level": "raid5f", 00:21:08.060 "superblock": true, 00:21:08.060 "num_base_bdevs": 4, 00:21:08.060 "num_base_bdevs_discovered": 4, 00:21:08.060 "num_base_bdevs_operational": 4, 00:21:08.060 "base_bdevs_list": [ 00:21:08.060 { 00:21:08.060 "name": "BaseBdev1", 00:21:08.060 "uuid": "d929d0d0-8bb0-410a-abe3-739e03188e6f", 00:21:08.060 "is_configured": true, 00:21:08.060 "data_offset": 2048, 00:21:08.060 "data_size": 63488 00:21:08.060 }, 00:21:08.060 { 00:21:08.060 "name": "BaseBdev2", 00:21:08.060 "uuid": "6e14dff9-6df3-4084-99d6-d23739a5e8cf", 00:21:08.060 "is_configured": true, 00:21:08.060 "data_offset": 2048, 00:21:08.060 "data_size": 63488 00:21:08.060 }, 00:21:08.060 { 00:21:08.060 "name": "BaseBdev3", 00:21:08.060 "uuid": "d320fe9f-993b-4db6-baec-20f51ac69366", 00:21:08.060 "is_configured": true, 00:21:08.060 "data_offset": 2048, 00:21:08.060 "data_size": 63488 00:21:08.060 }, 00:21:08.060 { 00:21:08.060 "name": "BaseBdev4", 00:21:08.060 "uuid": "61508ce2-c546-4660-9ff2-7a1d9f7d935a", 00:21:08.060 "is_configured": true, 00:21:08.060 "data_offset": 2048, 00:21:08.060 "data_size": 63488 00:21:08.060 } 00:21:08.060 ] 00:21:08.060 }' 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.060 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:08.628 [2024-10-30 10:49:29.928121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.628 10:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:08.628 "name": "Existed_Raid", 00:21:08.628 "aliases": [ 00:21:08.628 "de84b0f1-9cb6-41e7-b22f-b2ec677b564c" 00:21:08.628 ], 00:21:08.628 "product_name": "Raid Volume", 00:21:08.628 "block_size": 512, 00:21:08.628 "num_blocks": 190464, 00:21:08.628 "uuid": "de84b0f1-9cb6-41e7-b22f-b2ec677b564c", 00:21:08.628 "assigned_rate_limits": { 00:21:08.628 "rw_ios_per_sec": 0, 00:21:08.628 "rw_mbytes_per_sec": 0, 00:21:08.628 "r_mbytes_per_sec": 0, 00:21:08.628 "w_mbytes_per_sec": 0 00:21:08.628 }, 00:21:08.628 "claimed": false, 00:21:08.628 "zoned": false, 00:21:08.628 "supported_io_types": { 00:21:08.628 "read": true, 00:21:08.628 "write": true, 00:21:08.628 "unmap": false, 00:21:08.628 "flush": false, 
00:21:08.628 "reset": true, 00:21:08.628 "nvme_admin": false, 00:21:08.628 "nvme_io": false, 00:21:08.628 "nvme_io_md": false, 00:21:08.628 "write_zeroes": true, 00:21:08.628 "zcopy": false, 00:21:08.628 "get_zone_info": false, 00:21:08.628 "zone_management": false, 00:21:08.628 "zone_append": false, 00:21:08.628 "compare": false, 00:21:08.628 "compare_and_write": false, 00:21:08.628 "abort": false, 00:21:08.628 "seek_hole": false, 00:21:08.628 "seek_data": false, 00:21:08.628 "copy": false, 00:21:08.628 "nvme_iov_md": false 00:21:08.628 }, 00:21:08.628 "driver_specific": { 00:21:08.628 "raid": { 00:21:08.628 "uuid": "de84b0f1-9cb6-41e7-b22f-b2ec677b564c", 00:21:08.628 "strip_size_kb": 64, 00:21:08.628 "state": "online", 00:21:08.628 "raid_level": "raid5f", 00:21:08.628 "superblock": true, 00:21:08.628 "num_base_bdevs": 4, 00:21:08.628 "num_base_bdevs_discovered": 4, 00:21:08.628 "num_base_bdevs_operational": 4, 00:21:08.628 "base_bdevs_list": [ 00:21:08.628 { 00:21:08.628 "name": "BaseBdev1", 00:21:08.628 "uuid": "d929d0d0-8bb0-410a-abe3-739e03188e6f", 00:21:08.628 "is_configured": true, 00:21:08.628 "data_offset": 2048, 00:21:08.628 "data_size": 63488 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "name": "BaseBdev2", 00:21:08.628 "uuid": "6e14dff9-6df3-4084-99d6-d23739a5e8cf", 00:21:08.628 "is_configured": true, 00:21:08.628 "data_offset": 2048, 00:21:08.628 "data_size": 63488 00:21:08.628 }, 00:21:08.628 { 00:21:08.628 "name": "BaseBdev3", 00:21:08.628 "uuid": "d320fe9f-993b-4db6-baec-20f51ac69366", 00:21:08.628 "is_configured": true, 00:21:08.629 "data_offset": 2048, 00:21:08.629 "data_size": 63488 00:21:08.629 }, 00:21:08.629 { 00:21:08.629 "name": "BaseBdev4", 00:21:08.629 "uuid": "61508ce2-c546-4660-9ff2-7a1d9f7d935a", 00:21:08.629 "is_configured": true, 00:21:08.629 "data_offset": 2048, 00:21:08.629 "data_size": 63488 00:21:08.629 } 00:21:08.629 ] 00:21:08.629 } 00:21:08.629 } 00:21:08.629 }' 00:21:08.629 10:49:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:08.629 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:08.629 BaseBdev2 00:21:08.629 BaseBdev3 00:21:08.629 BaseBdev4' 00:21:08.629 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:08.629 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:08.629 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:08.629 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:08.629 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.629 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.629 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:08.888 10:49:30 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.888 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.888 [2024-10-30 10:49:30.331979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.147 "name": "Existed_Raid", 00:21:09.147 "uuid": "de84b0f1-9cb6-41e7-b22f-b2ec677b564c", 00:21:09.147 "strip_size_kb": 64, 00:21:09.147 "state": "online", 00:21:09.147 "raid_level": "raid5f", 00:21:09.147 "superblock": true, 00:21:09.147 "num_base_bdevs": 4, 00:21:09.147 "num_base_bdevs_discovered": 3, 00:21:09.147 "num_base_bdevs_operational": 3, 00:21:09.147 "base_bdevs_list": [ 00:21:09.147 { 00:21:09.147 "name": null, 00:21:09.147 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:09.147 "is_configured": false, 00:21:09.147 "data_offset": 0, 00:21:09.147 "data_size": 63488 00:21:09.147 }, 00:21:09.147 { 00:21:09.147 "name": "BaseBdev2", 00:21:09.147 "uuid": "6e14dff9-6df3-4084-99d6-d23739a5e8cf", 00:21:09.147 "is_configured": true, 00:21:09.147 "data_offset": 2048, 00:21:09.147 "data_size": 63488 00:21:09.147 }, 00:21:09.147 { 00:21:09.147 "name": "BaseBdev3", 00:21:09.147 "uuid": "d320fe9f-993b-4db6-baec-20f51ac69366", 00:21:09.147 "is_configured": true, 00:21:09.147 "data_offset": 2048, 00:21:09.147 "data_size": 63488 00:21:09.147 }, 00:21:09.147 { 00:21:09.147 "name": "BaseBdev4", 00:21:09.147 "uuid": "61508ce2-c546-4660-9ff2-7a1d9f7d935a", 00:21:09.147 "is_configured": true, 00:21:09.147 "data_offset": 2048, 00:21:09.147 "data_size": 63488 00:21:09.147 } 00:21:09.147 ] 00:21:09.147 }' 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.147 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.716 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:09.716 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:09.716 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:09.716 10:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.716 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.716 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.716 10:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.716 [2024-10-30 10:49:31.025398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:09.716 [2024-10-30 10:49:31.025638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:09.716 [2024-10-30 10:49:31.108280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:09.716 
10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.716 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.716 [2024-10-30 10:49:31.180320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.975 [2024-10-30 10:49:31.338371] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:09.975 [2024-10-30 10:49:31.338435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.975 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.235 BaseBdev2 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.235 [ 00:21:10.235 { 00:21:10.235 "name": "BaseBdev2", 00:21:10.235 "aliases": [ 00:21:10.235 "0d42307d-0c68-41e5-aee1-e851aeef6035" 00:21:10.235 ], 00:21:10.235 "product_name": "Malloc disk", 00:21:10.235 "block_size": 512, 00:21:10.235 "num_blocks": 65536, 00:21:10.235 "uuid": 
"0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:10.235 "assigned_rate_limits": { 00:21:10.235 "rw_ios_per_sec": 0, 00:21:10.235 "rw_mbytes_per_sec": 0, 00:21:10.235 "r_mbytes_per_sec": 0, 00:21:10.235 "w_mbytes_per_sec": 0 00:21:10.235 }, 00:21:10.235 "claimed": false, 00:21:10.235 "zoned": false, 00:21:10.235 "supported_io_types": { 00:21:10.235 "read": true, 00:21:10.235 "write": true, 00:21:10.235 "unmap": true, 00:21:10.235 "flush": true, 00:21:10.235 "reset": true, 00:21:10.235 "nvme_admin": false, 00:21:10.235 "nvme_io": false, 00:21:10.235 "nvme_io_md": false, 00:21:10.235 "write_zeroes": true, 00:21:10.235 "zcopy": true, 00:21:10.235 "get_zone_info": false, 00:21:10.235 "zone_management": false, 00:21:10.235 "zone_append": false, 00:21:10.235 "compare": false, 00:21:10.235 "compare_and_write": false, 00:21:10.235 "abort": true, 00:21:10.235 "seek_hole": false, 00:21:10.235 "seek_data": false, 00:21:10.235 "copy": true, 00:21:10.235 "nvme_iov_md": false 00:21:10.235 }, 00:21:10.235 "memory_domains": [ 00:21:10.235 { 00:21:10.235 "dma_device_id": "system", 00:21:10.235 "dma_device_type": 1 00:21:10.235 }, 00:21:10.235 { 00:21:10.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.235 "dma_device_type": 2 00:21:10.235 } 00:21:10.235 ], 00:21:10.235 "driver_specific": {} 00:21:10.235 } 00:21:10.235 ] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.235 BaseBdev3 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.235 [ 00:21:10.235 { 00:21:10.235 "name": "BaseBdev3", 00:21:10.235 "aliases": [ 00:21:10.235 "549d3028-696c-4fbd-8089-3460d2d5471b" 00:21:10.235 ], 00:21:10.235 
"product_name": "Malloc disk", 00:21:10.235 "block_size": 512, 00:21:10.235 "num_blocks": 65536, 00:21:10.235 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:10.235 "assigned_rate_limits": { 00:21:10.235 "rw_ios_per_sec": 0, 00:21:10.235 "rw_mbytes_per_sec": 0, 00:21:10.235 "r_mbytes_per_sec": 0, 00:21:10.235 "w_mbytes_per_sec": 0 00:21:10.235 }, 00:21:10.235 "claimed": false, 00:21:10.235 "zoned": false, 00:21:10.235 "supported_io_types": { 00:21:10.235 "read": true, 00:21:10.235 "write": true, 00:21:10.235 "unmap": true, 00:21:10.235 "flush": true, 00:21:10.235 "reset": true, 00:21:10.235 "nvme_admin": false, 00:21:10.235 "nvme_io": false, 00:21:10.235 "nvme_io_md": false, 00:21:10.235 "write_zeroes": true, 00:21:10.235 "zcopy": true, 00:21:10.235 "get_zone_info": false, 00:21:10.235 "zone_management": false, 00:21:10.235 "zone_append": false, 00:21:10.235 "compare": false, 00:21:10.235 "compare_and_write": false, 00:21:10.235 "abort": true, 00:21:10.235 "seek_hole": false, 00:21:10.235 "seek_data": false, 00:21:10.235 "copy": true, 00:21:10.235 "nvme_iov_md": false 00:21:10.235 }, 00:21:10.235 "memory_domains": [ 00:21:10.235 { 00:21:10.235 "dma_device_id": "system", 00:21:10.235 "dma_device_type": 1 00:21:10.235 }, 00:21:10.235 { 00:21:10.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.235 "dma_device_type": 2 00:21:10.235 } 00:21:10.235 ], 00:21:10.235 "driver_specific": {} 00:21:10.235 } 00:21:10.235 ] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.235 BaseBdev4 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:10.235 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.236 [ 00:21:10.236 { 00:21:10.236 "name": "BaseBdev4", 00:21:10.236 
"aliases": [ 00:21:10.236 "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869" 00:21:10.236 ], 00:21:10.236 "product_name": "Malloc disk", 00:21:10.236 "block_size": 512, 00:21:10.236 "num_blocks": 65536, 00:21:10.236 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:10.236 "assigned_rate_limits": { 00:21:10.236 "rw_ios_per_sec": 0, 00:21:10.236 "rw_mbytes_per_sec": 0, 00:21:10.236 "r_mbytes_per_sec": 0, 00:21:10.236 "w_mbytes_per_sec": 0 00:21:10.236 }, 00:21:10.236 "claimed": false, 00:21:10.236 "zoned": false, 00:21:10.236 "supported_io_types": { 00:21:10.236 "read": true, 00:21:10.236 "write": true, 00:21:10.236 "unmap": true, 00:21:10.236 "flush": true, 00:21:10.236 "reset": true, 00:21:10.236 "nvme_admin": false, 00:21:10.236 "nvme_io": false, 00:21:10.236 "nvme_io_md": false, 00:21:10.236 "write_zeroes": true, 00:21:10.236 "zcopy": true, 00:21:10.236 "get_zone_info": false, 00:21:10.236 "zone_management": false, 00:21:10.236 "zone_append": false, 00:21:10.236 "compare": false, 00:21:10.236 "compare_and_write": false, 00:21:10.236 "abort": true, 00:21:10.236 "seek_hole": false, 00:21:10.236 "seek_data": false, 00:21:10.236 "copy": true, 00:21:10.236 "nvme_iov_md": false 00:21:10.236 }, 00:21:10.236 "memory_domains": [ 00:21:10.236 { 00:21:10.236 "dma_device_id": "system", 00:21:10.236 "dma_device_type": 1 00:21:10.236 }, 00:21:10.236 { 00:21:10.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:10.236 "dma_device_type": 2 00:21:10.236 } 00:21:10.236 ], 00:21:10.236 "driver_specific": {} 00:21:10.236 } 00:21:10.236 ] 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:10.236 
10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.236 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.494 [2024-10-30 10:49:31.703907] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:10.494 [2024-10-30 10:49:31.703957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:10.494 [2024-10-30 10:49:31.704032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:10.494 [2024-10-30 10:49:31.706498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:10.494 [2024-10-30 10:49:31.706571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.494 "name": "Existed_Raid", 00:21:10.494 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:10.494 "strip_size_kb": 64, 00:21:10.494 "state": "configuring", 00:21:10.494 "raid_level": "raid5f", 00:21:10.494 "superblock": true, 00:21:10.494 "num_base_bdevs": 4, 00:21:10.494 "num_base_bdevs_discovered": 3, 00:21:10.494 "num_base_bdevs_operational": 4, 00:21:10.494 "base_bdevs_list": [ 00:21:10.494 { 00:21:10.494 "name": "BaseBdev1", 00:21:10.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.494 "is_configured": false, 00:21:10.494 "data_offset": 0, 00:21:10.494 "data_size": 0 00:21:10.494 }, 00:21:10.494 { 00:21:10.494 "name": "BaseBdev2", 00:21:10.494 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:10.494 "is_configured": true, 00:21:10.494 "data_offset": 2048, 00:21:10.494 "data_size": 63488 00:21:10.494 }, 00:21:10.494 { 00:21:10.494 "name": "BaseBdev3", 
00:21:10.494 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:10.494 "is_configured": true, 00:21:10.494 "data_offset": 2048, 00:21:10.494 "data_size": 63488 00:21:10.494 }, 00:21:10.494 { 00:21:10.494 "name": "BaseBdev4", 00:21:10.494 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:10.494 "is_configured": true, 00:21:10.494 "data_offset": 2048, 00:21:10.494 "data_size": 63488 00:21:10.494 } 00:21:10.494 ] 00:21:10.494 }' 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.494 10:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.062 [2024-10-30 10:49:32.256146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:11.062 
10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.062 "name": "Existed_Raid", 00:21:11.062 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:11.062 "strip_size_kb": 64, 00:21:11.062 "state": "configuring", 00:21:11.062 "raid_level": "raid5f", 00:21:11.062 "superblock": true, 00:21:11.062 "num_base_bdevs": 4, 00:21:11.062 "num_base_bdevs_discovered": 2, 00:21:11.062 "num_base_bdevs_operational": 4, 00:21:11.062 "base_bdevs_list": [ 00:21:11.062 { 00:21:11.062 "name": "BaseBdev1", 00:21:11.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.062 "is_configured": false, 00:21:11.062 "data_offset": 0, 00:21:11.062 "data_size": 0 00:21:11.062 }, 00:21:11.062 { 00:21:11.062 "name": null, 00:21:11.062 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:11.062 "is_configured": false, 00:21:11.062 "data_offset": 0, 00:21:11.062 "data_size": 63488 00:21:11.062 }, 00:21:11.062 { 
00:21:11.062 "name": "BaseBdev3", 00:21:11.062 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:11.062 "is_configured": true, 00:21:11.062 "data_offset": 2048, 00:21:11.062 "data_size": 63488 00:21:11.062 }, 00:21:11.062 { 00:21:11.062 "name": "BaseBdev4", 00:21:11.062 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:11.062 "is_configured": true, 00:21:11.062 "data_offset": 2048, 00:21:11.062 "data_size": 63488 00:21:11.062 } 00:21:11.062 ] 00:21:11.062 }' 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.062 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.630 [2024-10-30 10:49:32.903131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.630 BaseBdev1 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.630 [ 00:21:11.630 { 00:21:11.630 "name": "BaseBdev1", 00:21:11.630 "aliases": [ 00:21:11.630 "df872e91-30aa-41fb-b6cf-692e276e33a2" 00:21:11.630 ], 00:21:11.630 "product_name": "Malloc disk", 00:21:11.630 "block_size": 512, 00:21:11.630 "num_blocks": 65536, 00:21:11.630 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:11.630 "assigned_rate_limits": { 00:21:11.630 "rw_ios_per_sec": 0, 00:21:11.630 "rw_mbytes_per_sec": 0, 00:21:11.630 
"r_mbytes_per_sec": 0, 00:21:11.630 "w_mbytes_per_sec": 0 00:21:11.630 }, 00:21:11.630 "claimed": true, 00:21:11.630 "claim_type": "exclusive_write", 00:21:11.630 "zoned": false, 00:21:11.630 "supported_io_types": { 00:21:11.630 "read": true, 00:21:11.630 "write": true, 00:21:11.630 "unmap": true, 00:21:11.630 "flush": true, 00:21:11.630 "reset": true, 00:21:11.630 "nvme_admin": false, 00:21:11.630 "nvme_io": false, 00:21:11.630 "nvme_io_md": false, 00:21:11.630 "write_zeroes": true, 00:21:11.630 "zcopy": true, 00:21:11.630 "get_zone_info": false, 00:21:11.630 "zone_management": false, 00:21:11.630 "zone_append": false, 00:21:11.630 "compare": false, 00:21:11.630 "compare_and_write": false, 00:21:11.630 "abort": true, 00:21:11.630 "seek_hole": false, 00:21:11.630 "seek_data": false, 00:21:11.630 "copy": true, 00:21:11.630 "nvme_iov_md": false 00:21:11.630 }, 00:21:11.630 "memory_domains": [ 00:21:11.630 { 00:21:11.630 "dma_device_id": "system", 00:21:11.630 "dma_device_type": 1 00:21:11.630 }, 00:21:11.630 { 00:21:11.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.630 "dma_device_type": 2 00:21:11.630 } 00:21:11.630 ], 00:21:11.630 "driver_specific": {} 00:21:11.630 } 00:21:11.630 ] 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:11.630 10:49:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.630 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.630 "name": "Existed_Raid", 00:21:11.630 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:11.630 "strip_size_kb": 64, 00:21:11.630 "state": "configuring", 00:21:11.630 "raid_level": "raid5f", 00:21:11.630 "superblock": true, 00:21:11.630 "num_base_bdevs": 4, 00:21:11.630 "num_base_bdevs_discovered": 3, 00:21:11.630 "num_base_bdevs_operational": 4, 00:21:11.630 "base_bdevs_list": [ 00:21:11.630 { 00:21:11.631 "name": "BaseBdev1", 00:21:11.631 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:11.631 "is_configured": true, 00:21:11.631 "data_offset": 2048, 00:21:11.631 "data_size": 63488 00:21:11.631 
}, 00:21:11.631 { 00:21:11.631 "name": null, 00:21:11.631 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:11.631 "is_configured": false, 00:21:11.631 "data_offset": 0, 00:21:11.631 "data_size": 63488 00:21:11.631 }, 00:21:11.631 { 00:21:11.631 "name": "BaseBdev3", 00:21:11.631 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:11.631 "is_configured": true, 00:21:11.631 "data_offset": 2048, 00:21:11.631 "data_size": 63488 00:21:11.631 }, 00:21:11.631 { 00:21:11.631 "name": "BaseBdev4", 00:21:11.631 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:11.631 "is_configured": true, 00:21:11.631 "data_offset": 2048, 00:21:11.631 "data_size": 63488 00:21:11.631 } 00:21:11.631 ] 00:21:11.631 }' 00:21:11.631 10:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.631 10:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.199 
[2024-10-30 10:49:33.507465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.199 "name": "Existed_Raid", 00:21:12.199 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:12.199 "strip_size_kb": 64, 00:21:12.199 "state": "configuring", 00:21:12.199 "raid_level": "raid5f", 00:21:12.199 "superblock": true, 00:21:12.199 "num_base_bdevs": 4, 00:21:12.199 "num_base_bdevs_discovered": 2, 00:21:12.199 "num_base_bdevs_operational": 4, 00:21:12.199 "base_bdevs_list": [ 00:21:12.199 { 00:21:12.199 "name": "BaseBdev1", 00:21:12.199 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:12.199 "is_configured": true, 00:21:12.199 "data_offset": 2048, 00:21:12.199 "data_size": 63488 00:21:12.199 }, 00:21:12.199 { 00:21:12.199 "name": null, 00:21:12.199 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:12.199 "is_configured": false, 00:21:12.199 "data_offset": 0, 00:21:12.199 "data_size": 63488 00:21:12.199 }, 00:21:12.199 { 00:21:12.199 "name": null, 00:21:12.199 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:12.199 "is_configured": false, 00:21:12.199 "data_offset": 0, 00:21:12.199 "data_size": 63488 00:21:12.199 }, 00:21:12.199 { 00:21:12.199 "name": "BaseBdev4", 00:21:12.199 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:12.199 "is_configured": true, 00:21:12.199 "data_offset": 2048, 00:21:12.199 "data_size": 63488 00:21:12.199 } 00:21:12.199 ] 00:21:12.199 }' 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.199 10:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.770 [2024-10-30 10:49:34.119645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.770 10:49:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.770 "name": "Existed_Raid", 00:21:12.770 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:12.770 "strip_size_kb": 64, 00:21:12.770 "state": "configuring", 00:21:12.770 "raid_level": "raid5f", 00:21:12.770 "superblock": true, 00:21:12.770 "num_base_bdevs": 4, 00:21:12.770 "num_base_bdevs_discovered": 3, 00:21:12.770 "num_base_bdevs_operational": 4, 00:21:12.770 "base_bdevs_list": [ 00:21:12.770 { 00:21:12.770 "name": "BaseBdev1", 00:21:12.770 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:12.770 "is_configured": true, 00:21:12.770 "data_offset": 2048, 00:21:12.770 "data_size": 63488 00:21:12.770 }, 00:21:12.770 { 00:21:12.770 "name": null, 00:21:12.770 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:12.770 "is_configured": false, 00:21:12.770 "data_offset": 0, 00:21:12.770 "data_size": 63488 00:21:12.770 }, 00:21:12.770 { 00:21:12.770 "name": "BaseBdev3", 00:21:12.770 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:12.770 "is_configured": true, 00:21:12.770 "data_offset": 2048, 00:21:12.770 "data_size": 63488 00:21:12.770 }, 00:21:12.770 { 
00:21:12.770 "name": "BaseBdev4", 00:21:12.770 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:12.770 "is_configured": true, 00:21:12.770 "data_offset": 2048, 00:21:12.770 "data_size": 63488 00:21:12.770 } 00:21:12.770 ] 00:21:12.770 }' 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.770 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.338 [2024-10-30 10:49:34.699860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.338 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.597 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.597 "name": "Existed_Raid", 00:21:13.597 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:13.597 "strip_size_kb": 64, 00:21:13.597 "state": "configuring", 00:21:13.597 "raid_level": "raid5f", 00:21:13.597 "superblock": true, 00:21:13.597 "num_base_bdevs": 4, 00:21:13.597 "num_base_bdevs_discovered": 2, 00:21:13.597 
"num_base_bdevs_operational": 4, 00:21:13.597 "base_bdevs_list": [ 00:21:13.597 { 00:21:13.597 "name": null, 00:21:13.597 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:13.597 "is_configured": false, 00:21:13.597 "data_offset": 0, 00:21:13.597 "data_size": 63488 00:21:13.597 }, 00:21:13.597 { 00:21:13.597 "name": null, 00:21:13.597 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:13.597 "is_configured": false, 00:21:13.597 "data_offset": 0, 00:21:13.597 "data_size": 63488 00:21:13.597 }, 00:21:13.597 { 00:21:13.597 "name": "BaseBdev3", 00:21:13.597 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:13.597 "is_configured": true, 00:21:13.597 "data_offset": 2048, 00:21:13.597 "data_size": 63488 00:21:13.597 }, 00:21:13.597 { 00:21:13.597 "name": "BaseBdev4", 00:21:13.597 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:13.597 "is_configured": true, 00:21:13.597 "data_offset": 2048, 00:21:13.597 "data_size": 63488 00:21:13.597 } 00:21:13.597 ] 00:21:13.597 }' 00:21:13.597 10:49:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.597 10:49:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.856 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.857 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.857 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.857 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:13.857 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.115 [2024-10-30 10:49:35.356806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:14.115 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.116 "name": "Existed_Raid", 00:21:14.116 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:14.116 "strip_size_kb": 64, 00:21:14.116 "state": "configuring", 00:21:14.116 "raid_level": "raid5f", 00:21:14.116 "superblock": true, 00:21:14.116 "num_base_bdevs": 4, 00:21:14.116 "num_base_bdevs_discovered": 3, 00:21:14.116 "num_base_bdevs_operational": 4, 00:21:14.116 "base_bdevs_list": [ 00:21:14.116 { 00:21:14.116 "name": null, 00:21:14.116 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:14.116 "is_configured": false, 00:21:14.116 "data_offset": 0, 00:21:14.116 "data_size": 63488 00:21:14.116 }, 00:21:14.116 { 00:21:14.116 "name": "BaseBdev2", 00:21:14.116 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:14.116 "is_configured": true, 00:21:14.116 "data_offset": 2048, 00:21:14.116 "data_size": 63488 00:21:14.116 }, 00:21:14.116 { 00:21:14.116 "name": "BaseBdev3", 00:21:14.116 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:14.116 "is_configured": true, 00:21:14.116 "data_offset": 2048, 00:21:14.116 "data_size": 63488 00:21:14.116 }, 00:21:14.116 { 00:21:14.116 "name": "BaseBdev4", 00:21:14.116 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:14.116 "is_configured": true, 00:21:14.116 "data_offset": 2048, 00:21:14.116 "data_size": 63488 00:21:14.116 } 00:21:14.116 ] 00:21:14.116 }' 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.116 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u df872e91-30aa-41fb-b6cf-692e276e33a2 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.684 10:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 [2024-10-30 10:49:36.020803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:14.684 [2024-10-30 10:49:36.021134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:14.684 [2024-10-30 
10:49:36.021153] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:14.684 [2024-10-30 10:49:36.021459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:14.684 NewBaseBdev 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 [2024-10-30 10:49:36.027964] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:14.684 [2024-10-30 10:49:36.028012] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:14.684 [2024-10-30 10:49:36.028292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.684 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.684 [ 00:21:14.684 { 00:21:14.684 "name": "NewBaseBdev", 00:21:14.684 "aliases": [ 00:21:14.684 "df872e91-30aa-41fb-b6cf-692e276e33a2" 00:21:14.684 ], 00:21:14.684 "product_name": "Malloc disk", 00:21:14.684 "block_size": 512, 00:21:14.684 "num_blocks": 65536, 00:21:14.684 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:14.684 "assigned_rate_limits": { 00:21:14.684 "rw_ios_per_sec": 0, 00:21:14.684 "rw_mbytes_per_sec": 0, 00:21:14.684 "r_mbytes_per_sec": 0, 00:21:14.684 "w_mbytes_per_sec": 0 00:21:14.684 }, 00:21:14.684 "claimed": true, 00:21:14.684 "claim_type": "exclusive_write", 00:21:14.684 "zoned": false, 00:21:14.684 "supported_io_types": { 00:21:14.684 "read": true, 00:21:14.684 "write": true, 00:21:14.684 "unmap": true, 00:21:14.684 "flush": true, 00:21:14.684 "reset": true, 00:21:14.684 "nvme_admin": false, 00:21:14.684 "nvme_io": false, 00:21:14.685 "nvme_io_md": false, 00:21:14.685 "write_zeroes": true, 00:21:14.685 "zcopy": true, 00:21:14.685 "get_zone_info": false, 00:21:14.685 "zone_management": false, 00:21:14.685 "zone_append": false, 00:21:14.685 "compare": false, 00:21:14.685 "compare_and_write": false, 00:21:14.685 "abort": true, 00:21:14.685 "seek_hole": false, 00:21:14.685 "seek_data": false, 00:21:14.685 "copy": true, 00:21:14.685 "nvme_iov_md": false 00:21:14.685 }, 00:21:14.685 "memory_domains": [ 00:21:14.685 { 00:21:14.685 "dma_device_id": "system", 00:21:14.685 "dma_device_type": 1 00:21:14.685 }, 00:21:14.685 { 00:21:14.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.685 "dma_device_type": 2 00:21:14.685 } 00:21:14.685 ], 00:21:14.685 "driver_specific": {} 00:21:14.685 } 00:21:14.685 ] 00:21:14.685 10:49:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.685 "name": "Existed_Raid", 00:21:14.685 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:14.685 "strip_size_kb": 64, 00:21:14.685 "state": "online", 00:21:14.685 "raid_level": "raid5f", 00:21:14.685 "superblock": true, 00:21:14.685 "num_base_bdevs": 4, 00:21:14.685 "num_base_bdevs_discovered": 4, 00:21:14.685 "num_base_bdevs_operational": 4, 00:21:14.685 "base_bdevs_list": [ 00:21:14.685 { 00:21:14.685 "name": "NewBaseBdev", 00:21:14.685 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:14.685 "is_configured": true, 00:21:14.685 "data_offset": 2048, 00:21:14.685 "data_size": 63488 00:21:14.685 }, 00:21:14.685 { 00:21:14.685 "name": "BaseBdev2", 00:21:14.685 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:14.685 "is_configured": true, 00:21:14.685 "data_offset": 2048, 00:21:14.685 "data_size": 63488 00:21:14.685 }, 00:21:14.685 { 00:21:14.685 "name": "BaseBdev3", 00:21:14.685 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:14.685 "is_configured": true, 00:21:14.685 "data_offset": 2048, 00:21:14.685 "data_size": 63488 00:21:14.685 }, 00:21:14.685 { 00:21:14.685 "name": "BaseBdev4", 00:21:14.685 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:14.685 "is_configured": true, 00:21:14.685 "data_offset": 2048, 00:21:14.685 "data_size": 63488 00:21:14.685 } 00:21:14.685 ] 00:21:14.685 }' 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.685 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:15.254 [2024-10-30 10:49:36.600127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:15.254 "name": "Existed_Raid", 00:21:15.254 "aliases": [ 00:21:15.254 "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823" 00:21:15.254 ], 00:21:15.254 "product_name": "Raid Volume", 00:21:15.254 "block_size": 512, 00:21:15.254 "num_blocks": 190464, 00:21:15.254 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:15.254 "assigned_rate_limits": { 00:21:15.254 "rw_ios_per_sec": 0, 00:21:15.254 "rw_mbytes_per_sec": 0, 00:21:15.254 "r_mbytes_per_sec": 0, 00:21:15.254 "w_mbytes_per_sec": 0 00:21:15.254 }, 00:21:15.254 "claimed": false, 00:21:15.254 "zoned": false, 00:21:15.254 "supported_io_types": { 00:21:15.254 "read": true, 00:21:15.254 "write": true, 00:21:15.254 "unmap": false, 00:21:15.254 "flush": false, 00:21:15.254 "reset": true, 00:21:15.254 "nvme_admin": false, 00:21:15.254 "nvme_io": false, 
00:21:15.254 "nvme_io_md": false, 00:21:15.254 "write_zeroes": true, 00:21:15.254 "zcopy": false, 00:21:15.254 "get_zone_info": false, 00:21:15.254 "zone_management": false, 00:21:15.254 "zone_append": false, 00:21:15.254 "compare": false, 00:21:15.254 "compare_and_write": false, 00:21:15.254 "abort": false, 00:21:15.254 "seek_hole": false, 00:21:15.254 "seek_data": false, 00:21:15.254 "copy": false, 00:21:15.254 "nvme_iov_md": false 00:21:15.254 }, 00:21:15.254 "driver_specific": { 00:21:15.254 "raid": { 00:21:15.254 "uuid": "d382f77e-bd2a-4f6d-8ec3-af1c4bde0823", 00:21:15.254 "strip_size_kb": 64, 00:21:15.254 "state": "online", 00:21:15.254 "raid_level": "raid5f", 00:21:15.254 "superblock": true, 00:21:15.254 "num_base_bdevs": 4, 00:21:15.254 "num_base_bdevs_discovered": 4, 00:21:15.254 "num_base_bdevs_operational": 4, 00:21:15.254 "base_bdevs_list": [ 00:21:15.254 { 00:21:15.254 "name": "NewBaseBdev", 00:21:15.254 "uuid": "df872e91-30aa-41fb-b6cf-692e276e33a2", 00:21:15.254 "is_configured": true, 00:21:15.254 "data_offset": 2048, 00:21:15.254 "data_size": 63488 00:21:15.254 }, 00:21:15.254 { 00:21:15.254 "name": "BaseBdev2", 00:21:15.254 "uuid": "0d42307d-0c68-41e5-aee1-e851aeef6035", 00:21:15.254 "is_configured": true, 00:21:15.254 "data_offset": 2048, 00:21:15.254 "data_size": 63488 00:21:15.254 }, 00:21:15.254 { 00:21:15.254 "name": "BaseBdev3", 00:21:15.254 "uuid": "549d3028-696c-4fbd-8089-3460d2d5471b", 00:21:15.254 "is_configured": true, 00:21:15.254 "data_offset": 2048, 00:21:15.254 "data_size": 63488 00:21:15.254 }, 00:21:15.254 { 00:21:15.254 "name": "BaseBdev4", 00:21:15.254 "uuid": "4e29bfe4-b3d8-492b-ab41-fe2ca47e9869", 00:21:15.254 "is_configured": true, 00:21:15.254 "data_offset": 2048, 00:21:15.254 "data_size": 63488 00:21:15.254 } 00:21:15.254 ] 00:21:15.254 } 00:21:15.254 } 00:21:15.254 }' 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:21:15.254 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:21:15.254 BaseBdev2 00:21:15.254 BaseBdev3 00:21:15.255 BaseBdev4' 00:21:15.255 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.515 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:15.515 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.516 10:49:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.516 [2024-10-30 10:49:36.967877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:15.516 [2024-10-30 10:49:36.967931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.516 [2024-10-30 10:49:36.968042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.516 [2024-10-30 10:49:36.968418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.516 [2024-10-30 10:49:36.968445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83991 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83991 ']' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83991 00:21:15.516 10:49:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:15.516 10:49:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83991 00:21:15.775 10:49:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:15.775 10:49:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:15.775 killing process with pid 83991 00:21:15.775 10:49:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83991' 00:21:15.775 10:49:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83991 00:21:15.775 [2024-10-30 10:49:37.010401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:15.775 10:49:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83991 00:21:16.034 [2024-10-30 10:49:37.351707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:16.971 10:49:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:16.971 00:21:16.971 real 0m13.013s 00:21:16.971 user 0m21.691s 00:21:16.971 sys 0m1.813s 00:21:16.971 10:49:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.971 ************************************ 00:21:16.971 END TEST raid5f_state_function_test_sb 00:21:16.971 ************************************ 00:21:16.971 10:49:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.229 10:49:38 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:21:17.229 10:49:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:21:17.229 
10:49:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:17.229 10:49:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:17.229 ************************************ 00:21:17.229 START TEST raid5f_superblock_test 00:21:17.229 ************************************ 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84668 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84668 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84668 ']' 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:17.229 10:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.229 [2024-10-30 10:49:38.579701] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:21:17.229 [2024-10-30 10:49:38.580005] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84668 ] 00:21:17.488 [2024-10-30 10:49:38.773776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.488 [2024-10-30 10:49:38.927431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.747 [2024-10-30 10:49:39.147654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:17.747 [2024-10-30 10:49:39.147727] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.314 malloc1 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.314 [2024-10-30 10:49:39.573189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:18.314 [2024-10-30 10:49:39.573294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.314 [2024-10-30 10:49:39.573348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:18.314 [2024-10-30 10:49:39.573394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.314 [2024-10-30 10:49:39.576330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.314 [2024-10-30 10:49:39.576383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:18.314 pt1 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.314 malloc2 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.314 [2024-10-30 10:49:39.630288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:18.314 [2024-10-30 10:49:39.630369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.314 [2024-10-30 10:49:39.630418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:18.314 [2024-10-30 10:49:39.630446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.314 [2024-10-30 10:49:39.633330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.314 [2024-10-30 10:49:39.633396] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:18.314 pt2 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.314 malloc3 00:21:18.314 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.315 [2024-10-30 10:49:39.702173] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:18.315 [2024-10-30 10:49:39.702247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.315 [2024-10-30 10:49:39.702297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:18.315 [2024-10-30 10:49:39.702324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.315 [2024-10-30 10:49:39.705252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.315 [2024-10-30 10:49:39.705306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:18.315 pt3 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.315 10:49:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.315 malloc4 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.315 [2024-10-30 10:49:39.755558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:18.315 [2024-10-30 10:49:39.755633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.315 [2024-10-30 10:49:39.755679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:18.315 [2024-10-30 10:49:39.755706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.315 [2024-10-30 10:49:39.758820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.315 [2024-10-30 10:49:39.758887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:18.315 pt4 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.315 [2024-10-30 10:49:39.767658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:18.315 [2024-10-30 10:49:39.770384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:18.315 [2024-10-30 10:49:39.770557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:18.315 [2024-10-30 10:49:39.770686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:18.315 [2024-10-30 10:49:39.771058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:18.315 [2024-10-30 10:49:39.771093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:18.315 [2024-10-30 10:49:39.771496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:18.315 [2024-10-30 10:49:39.778638] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:18.315 [2024-10-30 10:49:39.778677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:18.315 [2024-10-30 10:49:39.779014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.315 
10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.315 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.572 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.572 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.572 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.572 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.572 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.572 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.572 "name": "raid_bdev1", 00:21:18.572 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:18.572 "strip_size_kb": 64, 00:21:18.572 "state": "online", 00:21:18.572 "raid_level": "raid5f", 00:21:18.572 "superblock": true, 00:21:18.572 "num_base_bdevs": 4, 00:21:18.572 "num_base_bdevs_discovered": 4, 00:21:18.572 "num_base_bdevs_operational": 4, 00:21:18.572 "base_bdevs_list": [ 00:21:18.572 { 00:21:18.573 "name": "pt1", 00:21:18.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.573 "is_configured": true, 00:21:18.573 "data_offset": 2048, 00:21:18.573 "data_size": 63488 00:21:18.573 }, 00:21:18.573 { 00:21:18.573 "name": "pt2", 00:21:18.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.573 "is_configured": true, 00:21:18.573 "data_offset": 2048, 00:21:18.573 
"data_size": 63488 00:21:18.573 }, 00:21:18.573 { 00:21:18.573 "name": "pt3", 00:21:18.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.573 "is_configured": true, 00:21:18.573 "data_offset": 2048, 00:21:18.573 "data_size": 63488 00:21:18.573 }, 00:21:18.573 { 00:21:18.573 "name": "pt4", 00:21:18.573 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:18.573 "is_configured": true, 00:21:18.573 "data_offset": 2048, 00:21:18.573 "data_size": 63488 00:21:18.573 } 00:21:18.573 ] 00:21:18.573 }' 00:21:18.573 10:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.573 10:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.139 [2024-10-30 10:49:40.327589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:19.139 "name": "raid_bdev1", 00:21:19.139 "aliases": [ 00:21:19.139 "08ffd8d9-7770-49e1-8fc6-a33a10f02c26" 00:21:19.139 ], 00:21:19.139 "product_name": "Raid Volume", 00:21:19.139 "block_size": 512, 00:21:19.139 "num_blocks": 190464, 00:21:19.139 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:19.139 "assigned_rate_limits": { 00:21:19.139 "rw_ios_per_sec": 0, 00:21:19.139 "rw_mbytes_per_sec": 0, 00:21:19.139 "r_mbytes_per_sec": 0, 00:21:19.139 "w_mbytes_per_sec": 0 00:21:19.139 }, 00:21:19.139 "claimed": false, 00:21:19.139 "zoned": false, 00:21:19.139 "supported_io_types": { 00:21:19.139 "read": true, 00:21:19.139 "write": true, 00:21:19.139 "unmap": false, 00:21:19.139 "flush": false, 00:21:19.139 "reset": true, 00:21:19.139 "nvme_admin": false, 00:21:19.139 "nvme_io": false, 00:21:19.139 "nvme_io_md": false, 00:21:19.139 "write_zeroes": true, 00:21:19.139 "zcopy": false, 00:21:19.139 "get_zone_info": false, 00:21:19.139 "zone_management": false, 00:21:19.139 "zone_append": false, 00:21:19.139 "compare": false, 00:21:19.139 "compare_and_write": false, 00:21:19.139 "abort": false, 00:21:19.139 "seek_hole": false, 00:21:19.139 "seek_data": false, 00:21:19.139 "copy": false, 00:21:19.139 "nvme_iov_md": false 00:21:19.139 }, 00:21:19.139 "driver_specific": { 00:21:19.139 "raid": { 00:21:19.139 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:19.139 "strip_size_kb": 64, 00:21:19.139 "state": "online", 00:21:19.139 "raid_level": "raid5f", 00:21:19.139 "superblock": true, 00:21:19.139 "num_base_bdevs": 4, 00:21:19.139 "num_base_bdevs_discovered": 4, 00:21:19.139 "num_base_bdevs_operational": 4, 00:21:19.139 "base_bdevs_list": [ 00:21:19.139 { 00:21:19.139 "name": "pt1", 00:21:19.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:19.139 "is_configured": true, 00:21:19.139 "data_offset": 2048, 
00:21:19.139 "data_size": 63488 00:21:19.139 }, 00:21:19.139 { 00:21:19.139 "name": "pt2", 00:21:19.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:19.139 "is_configured": true, 00:21:19.139 "data_offset": 2048, 00:21:19.139 "data_size": 63488 00:21:19.139 }, 00:21:19.139 { 00:21:19.139 "name": "pt3", 00:21:19.139 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:19.139 "is_configured": true, 00:21:19.139 "data_offset": 2048, 00:21:19.139 "data_size": 63488 00:21:19.139 }, 00:21:19.139 { 00:21:19.139 "name": "pt4", 00:21:19.139 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:19.139 "is_configured": true, 00:21:19.139 "data_offset": 2048, 00:21:19.139 "data_size": 63488 00:21:19.139 } 00:21:19.139 ] 00:21:19.139 } 00:21:19.139 } 00:21:19.139 }' 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:19.139 pt2 00:21:19.139 pt3 00:21:19.139 pt4' 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.139 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.140 10:49:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.140 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.398 [2024-10-30 10:49:40.699652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=08ffd8d9-7770-49e1-8fc6-a33a10f02c26 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
08ffd8d9-7770-49e1-8fc6-a33a10f02c26 ']' 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.398 [2024-10-30 10:49:40.751424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.398 [2024-10-30 10:49:40.751462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.398 [2024-10-30 10:49:40.751679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.398 [2024-10-30 10:49:40.751836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.398 [2024-10-30 10:49:40.751879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:19.398 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:19.399 
10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.399 10:49:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:19.399 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:19.657 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.657 [2024-10-30 10:49:40.899502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:19.657 [2024-10-30 10:49:40.902013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:19.657 [2024-10-30 10:49:40.902111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:19.657 [2024-10-30 10:49:40.902201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:19.657 [2024-10-30 10:49:40.902314] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:19.658 [2024-10-30 10:49:40.902425] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:19.658 [2024-10-30 10:49:40.902492] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:19.658 [2024-10-30 10:49:40.902554] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:21:19.658 [2024-10-30 10:49:40.902591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.658 [2024-10-30 10:49:40.902613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:19.658 request: 00:21:19.658 { 00:21:19.658 "name": "raid_bdev1", 00:21:19.658 "raid_level": "raid5f", 00:21:19.658 "base_bdevs": [ 00:21:19.658 "malloc1", 00:21:19.658 "malloc2", 00:21:19.658 "malloc3", 00:21:19.658 "malloc4" 00:21:19.658 ], 00:21:19.658 "strip_size_kb": 64, 00:21:19.658 "superblock": false, 00:21:19.658 "method": "bdev_raid_create", 00:21:19.658 "req_id": 1 00:21:19.658 } 00:21:19.658 Got JSON-RPC error response 
00:21:19.658 response: 00:21:19.658 { 00:21:19.658 "code": -17, 00:21:19.658 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:19.658 } 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.658 [2024-10-30 10:49:40.967522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:19.658 [2024-10-30 10:49:40.967610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:21:19.658 [2024-10-30 10:49:40.967647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:19.658 [2024-10-30 10:49:40.967711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.658 [2024-10-30 10:49:40.970655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.658 [2024-10-30 10:49:40.970726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:19.658 [2024-10-30 10:49:40.970891] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:19.658 [2024-10-30 10:49:40.971012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:19.658 pt1 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.658 10:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.658 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.658 "name": "raid_bdev1", 00:21:19.658 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:19.658 "strip_size_kb": 64, 00:21:19.658 "state": "configuring", 00:21:19.658 "raid_level": "raid5f", 00:21:19.658 "superblock": true, 00:21:19.658 "num_base_bdevs": 4, 00:21:19.658 "num_base_bdevs_discovered": 1, 00:21:19.658 "num_base_bdevs_operational": 4, 00:21:19.658 "base_bdevs_list": [ 00:21:19.658 { 00:21:19.658 "name": "pt1", 00:21:19.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:19.658 "is_configured": true, 00:21:19.658 "data_offset": 2048, 00:21:19.658 "data_size": 63488 00:21:19.658 }, 00:21:19.658 { 00:21:19.658 "name": null, 00:21:19.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:19.658 "is_configured": false, 00:21:19.658 "data_offset": 2048, 00:21:19.658 "data_size": 63488 00:21:19.658 }, 00:21:19.658 { 00:21:19.658 "name": null, 00:21:19.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:19.658 "is_configured": false, 00:21:19.658 "data_offset": 2048, 00:21:19.658 "data_size": 63488 00:21:19.658 }, 00:21:19.658 { 00:21:19.658 "name": null, 00:21:19.658 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:19.658 "is_configured": false, 00:21:19.658 "data_offset": 2048, 00:21:19.658 "data_size": 63488 00:21:19.658 } 00:21:19.658 ] 00:21:19.658 }' 
00:21:19.658 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.658 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.226 [2024-10-30 10:49:41.495780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:20.226 [2024-10-30 10:49:41.495898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.226 [2024-10-30 10:49:41.495975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:20.226 [2024-10-30 10:49:41.496025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.226 [2024-10-30 10:49:41.496628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.226 [2024-10-30 10:49:41.496688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:20.226 [2024-10-30 10:49:41.496847] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:20.226 [2024-10-30 10:49:41.496907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:20.226 pt2 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.226 [2024-10-30 10:49:41.503753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.226 "name": "raid_bdev1", 00:21:20.226 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:20.226 "strip_size_kb": 64, 00:21:20.226 "state": "configuring", 00:21:20.226 "raid_level": "raid5f", 00:21:20.226 "superblock": true, 00:21:20.226 "num_base_bdevs": 4, 00:21:20.226 "num_base_bdevs_discovered": 1, 00:21:20.226 "num_base_bdevs_operational": 4, 00:21:20.226 "base_bdevs_list": [ 00:21:20.226 { 00:21:20.226 "name": "pt1", 00:21:20.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:20.226 "is_configured": true, 00:21:20.226 "data_offset": 2048, 00:21:20.226 "data_size": 63488 00:21:20.226 }, 00:21:20.226 { 00:21:20.226 "name": null, 00:21:20.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:20.226 "is_configured": false, 00:21:20.226 "data_offset": 0, 00:21:20.226 "data_size": 63488 00:21:20.226 }, 00:21:20.226 { 00:21:20.226 "name": null, 00:21:20.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:20.226 "is_configured": false, 00:21:20.226 "data_offset": 2048, 00:21:20.226 "data_size": 63488 00:21:20.226 }, 00:21:20.226 { 00:21:20.226 "name": null, 00:21:20.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:20.226 "is_configured": false, 00:21:20.226 "data_offset": 2048, 00:21:20.226 "data_size": 63488 00:21:20.226 } 00:21:20.226 ] 00:21:20.226 }' 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.226 10:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.794 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:20.794 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:20.794 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:21:20.794 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.794 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.794 [2024-10-30 10:49:42.027939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:20.794 [2024-10-30 10:49:42.028057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.794 [2024-10-30 10:49:42.028112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:20.794 [2024-10-30 10:49:42.028146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.794 [2024-10-30 10:49:42.028775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.794 [2024-10-30 10:49:42.028821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:20.794 [2024-10-30 10:49:42.029071] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:20.794 [2024-10-30 10:49:42.029125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:20.794 pt2 00:21:20.794 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.794 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:20.794 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.795 [2024-10-30 10:49:42.039895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:21:20.795 [2024-10-30 10:49:42.039998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.795 [2024-10-30 10:49:42.040074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:20.795 [2024-10-30 10:49:42.040118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.795 [2024-10-30 10:49:42.040708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.795 [2024-10-30 10:49:42.040788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:20.795 [2024-10-30 10:49:42.040967] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:20.795 [2024-10-30 10:49:42.041034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:20.795 pt3 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.795 [2024-10-30 10:49:42.047867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:20.795 [2024-10-30 10:49:42.047958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.795 [2024-10-30 10:49:42.048035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:20.795 [2024-10-30 10:49:42.048064] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.795 [2024-10-30 10:49:42.048631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.795 [2024-10-30 10:49:42.048674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:20.795 [2024-10-30 10:49:42.048826] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:20.795 [2024-10-30 10:49:42.048867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:20.795 [2024-10-30 10:49:42.049149] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:20.795 [2024-10-30 10:49:42.049179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:20.795 [2024-10-30 10:49:42.049545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:20.795 [2024-10-30 10:49:42.055727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:20.795 [2024-10-30 10:49:42.055765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:20.795 [2024-10-30 10:49:42.056125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.795 pt4 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:20.795 "name": "raid_bdev1", 00:21:20.795 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:20.795 "strip_size_kb": 64, 00:21:20.795 "state": "online", 00:21:20.795 "raid_level": "raid5f", 00:21:20.795 "superblock": true, 00:21:20.795 "num_base_bdevs": 4, 00:21:20.795 "num_base_bdevs_discovered": 4, 00:21:20.795 "num_base_bdevs_operational": 4, 00:21:20.795 "base_bdevs_list": [ 00:21:20.795 { 00:21:20.795 "name": "pt1", 00:21:20.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:20.795 "is_configured": true, 00:21:20.795 
"data_offset": 2048, 00:21:20.795 "data_size": 63488 00:21:20.795 }, 00:21:20.795 { 00:21:20.795 "name": "pt2", 00:21:20.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:20.795 "is_configured": true, 00:21:20.795 "data_offset": 2048, 00:21:20.795 "data_size": 63488 00:21:20.795 }, 00:21:20.795 { 00:21:20.795 "name": "pt3", 00:21:20.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:20.795 "is_configured": true, 00:21:20.795 "data_offset": 2048, 00:21:20.795 "data_size": 63488 00:21:20.795 }, 00:21:20.795 { 00:21:20.795 "name": "pt4", 00:21:20.795 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:20.795 "is_configured": true, 00:21:20.795 "data_offset": 2048, 00:21:20.795 "data_size": 63488 00:21:20.795 } 00:21:20.795 ] 00:21:20.795 }' 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:20.795 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:21.363 10:49:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.363 [2024-10-30 10:49:42.589062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.363 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:21.363 "name": "raid_bdev1", 00:21:21.363 "aliases": [ 00:21:21.363 "08ffd8d9-7770-49e1-8fc6-a33a10f02c26" 00:21:21.363 ], 00:21:21.363 "product_name": "Raid Volume", 00:21:21.363 "block_size": 512, 00:21:21.363 "num_blocks": 190464, 00:21:21.363 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:21.363 "assigned_rate_limits": { 00:21:21.363 "rw_ios_per_sec": 0, 00:21:21.363 "rw_mbytes_per_sec": 0, 00:21:21.363 "r_mbytes_per_sec": 0, 00:21:21.363 "w_mbytes_per_sec": 0 00:21:21.363 }, 00:21:21.363 "claimed": false, 00:21:21.363 "zoned": false, 00:21:21.363 "supported_io_types": { 00:21:21.363 "read": true, 00:21:21.363 "write": true, 00:21:21.363 "unmap": false, 00:21:21.363 "flush": false, 00:21:21.363 "reset": true, 00:21:21.363 "nvme_admin": false, 00:21:21.363 "nvme_io": false, 00:21:21.363 "nvme_io_md": false, 00:21:21.363 "write_zeroes": true, 00:21:21.363 "zcopy": false, 00:21:21.363 "get_zone_info": false, 00:21:21.363 "zone_management": false, 00:21:21.363 "zone_append": false, 00:21:21.363 "compare": false, 00:21:21.363 "compare_and_write": false, 00:21:21.363 "abort": false, 00:21:21.363 "seek_hole": false, 00:21:21.363 "seek_data": false, 00:21:21.363 "copy": false, 00:21:21.363 "nvme_iov_md": false 00:21:21.363 }, 00:21:21.363 "driver_specific": { 00:21:21.363 "raid": { 00:21:21.363 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:21.363 "strip_size_kb": 64, 00:21:21.363 "state": "online", 00:21:21.363 "raid_level": "raid5f", 00:21:21.363 "superblock": true, 00:21:21.363 "num_base_bdevs": 4, 00:21:21.363 "num_base_bdevs_discovered": 4, 
00:21:21.363 "num_base_bdevs_operational": 4, 00:21:21.363 "base_bdevs_list": [ 00:21:21.363 { 00:21:21.363 "name": "pt1", 00:21:21.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:21.363 "is_configured": true, 00:21:21.363 "data_offset": 2048, 00:21:21.363 "data_size": 63488 00:21:21.363 }, 00:21:21.363 { 00:21:21.363 "name": "pt2", 00:21:21.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:21.363 "is_configured": true, 00:21:21.363 "data_offset": 2048, 00:21:21.363 "data_size": 63488 00:21:21.364 }, 00:21:21.364 { 00:21:21.364 "name": "pt3", 00:21:21.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:21.364 "is_configured": true, 00:21:21.364 "data_offset": 2048, 00:21:21.364 "data_size": 63488 00:21:21.364 }, 00:21:21.364 { 00:21:21.364 "name": "pt4", 00:21:21.364 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:21.364 "is_configured": true, 00:21:21.364 "data_offset": 2048, 00:21:21.364 "data_size": 63488 00:21:21.364 } 00:21:21.364 ] 00:21:21.364 } 00:21:21.364 } 00:21:21.364 }' 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:21.364 pt2 00:21:21.364 pt3 00:21:21.364 pt4' 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.364 10:49:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.364 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.623 
10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:21.623 [2024-10-30 10:49:42.965079] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.623 10:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 08ffd8d9-7770-49e1-8fc6-a33a10f02c26 '!=' 08ffd8d9-7770-49e1-8fc6-a33a10f02c26 ']' 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.623 [2024-10-30 10:49:43.016898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:21.623 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.624 "name": "raid_bdev1", 00:21:21.624 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:21.624 "strip_size_kb": 64, 00:21:21.624 "state": "online", 00:21:21.624 "raid_level": "raid5f", 00:21:21.624 "superblock": true, 00:21:21.624 "num_base_bdevs": 4, 00:21:21.624 "num_base_bdevs_discovered": 3, 00:21:21.624 "num_base_bdevs_operational": 3, 00:21:21.624 "base_bdevs_list": [ 00:21:21.624 { 00:21:21.624 "name": null, 00:21:21.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.624 "is_configured": false, 00:21:21.624 "data_offset": 0, 00:21:21.624 "data_size": 63488 00:21:21.624 }, 00:21:21.624 { 00:21:21.624 "name": "pt2", 00:21:21.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:21.624 "is_configured": true, 00:21:21.624 "data_offset": 2048, 00:21:21.624 "data_size": 63488 00:21:21.624 }, 00:21:21.624 { 00:21:21.624 "name": "pt3", 00:21:21.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:21.624 "is_configured": true, 00:21:21.624 "data_offset": 2048, 00:21:21.624 "data_size": 63488 00:21:21.624 }, 00:21:21.624 { 00:21:21.624 "name": "pt4", 00:21:21.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:21.624 "is_configured": true, 00:21:21.624 
"data_offset": 2048, 00:21:21.624 "data_size": 63488 00:21:21.624 } 00:21:21.624 ] 00:21:21.624 }' 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.624 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.189 [2024-10-30 10:49:43.597073] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:22.189 [2024-10-30 10:49:43.597142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:22.189 [2024-10-30 10:49:43.597244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.189 [2024-10-30 10:49:43.597374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.189 [2024-10-30 10:49:43.597391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.189 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.448 [2024-10-30 10:49:43.681067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:22.448 [2024-10-30 10:49:43.681147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.448 [2024-10-30 10:49:43.681173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:22.448 [2024-10-30 10:49:43.681187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.448 [2024-10-30 10:49:43.684162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.448 [2024-10-30 10:49:43.684222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:22.448 [2024-10-30 10:49:43.684323] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:22.448 [2024-10-30 10:49:43.684396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:22.448 pt2 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.448 "name": "raid_bdev1", 00:21:22.448 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:22.448 "strip_size_kb": 64, 00:21:22.448 "state": "configuring", 00:21:22.448 "raid_level": "raid5f", 00:21:22.448 "superblock": true, 00:21:22.448 
"num_base_bdevs": 4, 00:21:22.448 "num_base_bdevs_discovered": 1, 00:21:22.448 "num_base_bdevs_operational": 3, 00:21:22.448 "base_bdevs_list": [ 00:21:22.448 { 00:21:22.448 "name": null, 00:21:22.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.448 "is_configured": false, 00:21:22.448 "data_offset": 2048, 00:21:22.448 "data_size": 63488 00:21:22.448 }, 00:21:22.448 { 00:21:22.448 "name": "pt2", 00:21:22.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.448 "is_configured": true, 00:21:22.448 "data_offset": 2048, 00:21:22.448 "data_size": 63488 00:21:22.448 }, 00:21:22.448 { 00:21:22.448 "name": null, 00:21:22.448 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:22.448 "is_configured": false, 00:21:22.448 "data_offset": 2048, 00:21:22.448 "data_size": 63488 00:21:22.448 }, 00:21:22.448 { 00:21:22.448 "name": null, 00:21:22.448 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:22.448 "is_configured": false, 00:21:22.448 "data_offset": 2048, 00:21:22.448 "data_size": 63488 00:21:22.448 } 00:21:22.448 ] 00:21:22.448 }' 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.448 10:49:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.017 [2024-10-30 10:49:44.209290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:23.017 [2024-10-30 
10:49:44.209377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.017 [2024-10-30 10:49:44.209409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:23.017 [2024-10-30 10:49:44.209424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.017 [2024-10-30 10:49:44.209987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.017 [2024-10-30 10:49:44.210035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:23.017 [2024-10-30 10:49:44.210151] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:23.017 [2024-10-30 10:49:44.210190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:23.017 pt3 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.017 "name": "raid_bdev1", 00:21:23.017 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:23.017 "strip_size_kb": 64, 00:21:23.017 "state": "configuring", 00:21:23.017 "raid_level": "raid5f", 00:21:23.017 "superblock": true, 00:21:23.017 "num_base_bdevs": 4, 00:21:23.017 "num_base_bdevs_discovered": 2, 00:21:23.017 "num_base_bdevs_operational": 3, 00:21:23.017 "base_bdevs_list": [ 00:21:23.017 { 00:21:23.017 "name": null, 00:21:23.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.017 "is_configured": false, 00:21:23.017 "data_offset": 2048, 00:21:23.017 "data_size": 63488 00:21:23.017 }, 00:21:23.017 { 00:21:23.017 "name": "pt2", 00:21:23.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.017 "is_configured": true, 00:21:23.017 "data_offset": 2048, 00:21:23.017 "data_size": 63488 00:21:23.017 }, 00:21:23.017 { 00:21:23.017 "name": "pt3", 00:21:23.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.017 "is_configured": true, 00:21:23.017 "data_offset": 2048, 00:21:23.017 "data_size": 63488 00:21:23.017 }, 00:21:23.017 { 00:21:23.017 "name": null, 00:21:23.017 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:23.017 "is_configured": false, 00:21:23.017 "data_offset": 2048, 
00:21:23.017 "data_size": 63488 00:21:23.017 } 00:21:23.017 ] 00:21:23.017 }' 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.017 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.277 [2024-10-30 10:49:44.701455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:23.277 [2024-10-30 10:49:44.701569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.277 [2024-10-30 10:49:44.701601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:23.277 [2024-10-30 10:49:44.701616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.277 [2024-10-30 10:49:44.702187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.277 [2024-10-30 10:49:44.702222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:23.277 [2024-10-30 10:49:44.702326] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:23.277 [2024-10-30 10:49:44.702357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:23.277 [2024-10-30 10:49:44.702523] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:23.277 [2024-10-30 10:49:44.702548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:23.277 [2024-10-30 10:49:44.702860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:23.277 [2024-10-30 10:49:44.709749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:23.277 [2024-10-30 10:49:44.709784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:23.277 [2024-10-30 10:49:44.710156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.277 pt4 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.277 
10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.277 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.536 10:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.536 "name": "raid_bdev1", 00:21:23.536 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:23.536 "strip_size_kb": 64, 00:21:23.536 "state": "online", 00:21:23.536 "raid_level": "raid5f", 00:21:23.536 "superblock": true, 00:21:23.536 "num_base_bdevs": 4, 00:21:23.536 "num_base_bdevs_discovered": 3, 00:21:23.536 "num_base_bdevs_operational": 3, 00:21:23.536 "base_bdevs_list": [ 00:21:23.536 { 00:21:23.536 "name": null, 00:21:23.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.536 "is_configured": false, 00:21:23.536 "data_offset": 2048, 00:21:23.536 "data_size": 63488 00:21:23.536 }, 00:21:23.536 { 00:21:23.536 "name": "pt2", 00:21:23.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.536 "is_configured": true, 00:21:23.536 "data_offset": 2048, 00:21:23.536 "data_size": 63488 00:21:23.536 }, 00:21:23.536 { 00:21:23.536 "name": "pt3", 00:21:23.536 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.536 "is_configured": true, 00:21:23.536 "data_offset": 2048, 00:21:23.536 "data_size": 63488 00:21:23.536 }, 00:21:23.536 { 00:21:23.536 "name": "pt4", 00:21:23.536 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:23.536 "is_configured": true, 00:21:23.536 "data_offset": 2048, 00:21:23.536 "data_size": 63488 00:21:23.536 } 00:21:23.536 ] 00:21:23.536 }' 00:21:23.536 10:49:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.536 10:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.795 [2024-10-30 10:49:45.217647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.795 [2024-10-30 10:49:45.217707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:23.795 [2024-10-30 10:49:45.217795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.795 [2024-10-30 10:49:45.217945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.795 [2024-10-30 10:49:45.218004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:23.795 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.054 [2024-10-30 10:49:45.289597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:24.054 [2024-10-30 10:49:45.289664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.054 [2024-10-30 10:49:45.289696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:24.054 [2024-10-30 10:49:45.289712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.054 [2024-10-30 10:49:45.292752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.054 [2024-10-30 10:49:45.292799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:24.054 [2024-10-30 10:49:45.292896] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:24.054 [2024-10-30 10:49:45.292965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:24.054 
[2024-10-30 10:49:45.293157] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:24.054 [2024-10-30 10:49:45.293190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.054 [2024-10-30 10:49:45.293212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:24.054 [2024-10-30 10:49:45.293282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:24.054 [2024-10-30 10:49:45.293454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:24.054 pt1 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.054 "name": "raid_bdev1", 00:21:24.054 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:24.054 "strip_size_kb": 64, 00:21:24.054 "state": "configuring", 00:21:24.054 "raid_level": "raid5f", 00:21:24.054 "superblock": true, 00:21:24.054 "num_base_bdevs": 4, 00:21:24.054 "num_base_bdevs_discovered": 2, 00:21:24.054 "num_base_bdevs_operational": 3, 00:21:24.054 "base_bdevs_list": [ 00:21:24.054 { 00:21:24.054 "name": null, 00:21:24.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.054 "is_configured": false, 00:21:24.054 "data_offset": 2048, 00:21:24.054 "data_size": 63488 00:21:24.054 }, 00:21:24.054 { 00:21:24.054 "name": "pt2", 00:21:24.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.054 "is_configured": true, 00:21:24.054 "data_offset": 2048, 00:21:24.054 "data_size": 63488 00:21:24.054 }, 00:21:24.054 { 00:21:24.054 "name": "pt3", 00:21:24.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.054 "is_configured": true, 00:21:24.054 "data_offset": 2048, 00:21:24.054 "data_size": 63488 00:21:24.054 }, 00:21:24.054 { 00:21:24.054 "name": null, 00:21:24.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:24.054 "is_configured": false, 00:21:24.054 "data_offset": 2048, 00:21:24.054 "data_size": 63488 00:21:24.054 } 00:21:24.054 ] 
00:21:24.054 }' 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.054 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.621 [2024-10-30 10:49:45.873835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:24.621 [2024-10-30 10:49:45.873920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.621 [2024-10-30 10:49:45.873953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:24.621 [2024-10-30 10:49:45.873967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.621 [2024-10-30 10:49:45.874530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.621 [2024-10-30 10:49:45.874564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:21:24.621 [2024-10-30 10:49:45.874668] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:21:24.621 [2024-10-30 10:49:45.874707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:24.621 [2024-10-30 10:49:45.874879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:24.621 [2024-10-30 10:49:45.874904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:24.621 [2024-10-30 10:49:45.875236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:24.621 [2024-10-30 10:49:45.882045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:24.621 [2024-10-30 10:49:45.882091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:24.621 [2024-10-30 10:49:45.882407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.621 pt4 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.621 10:49:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.621 "name": "raid_bdev1", 00:21:24.621 "uuid": "08ffd8d9-7770-49e1-8fc6-a33a10f02c26", 00:21:24.621 "strip_size_kb": 64, 00:21:24.621 "state": "online", 00:21:24.621 "raid_level": "raid5f", 00:21:24.621 "superblock": true, 00:21:24.621 "num_base_bdevs": 4, 00:21:24.621 "num_base_bdevs_discovered": 3, 00:21:24.621 "num_base_bdevs_operational": 3, 00:21:24.621 "base_bdevs_list": [ 00:21:24.621 { 00:21:24.621 "name": null, 00:21:24.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.621 "is_configured": false, 00:21:24.621 "data_offset": 2048, 00:21:24.621 "data_size": 63488 00:21:24.621 }, 00:21:24.621 { 00:21:24.621 "name": "pt2", 00:21:24.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.621 "is_configured": true, 00:21:24.621 "data_offset": 2048, 00:21:24.621 "data_size": 63488 00:21:24.621 }, 00:21:24.621 { 00:21:24.621 "name": "pt3", 00:21:24.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.621 "is_configured": true, 00:21:24.621 "data_offset": 2048, 00:21:24.621 "data_size": 63488 
00:21:24.621 }, 00:21:24.621 { 00:21:24.621 "name": "pt4", 00:21:24.621 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:24.621 "is_configured": true, 00:21:24.621 "data_offset": 2048, 00:21:24.621 "data_size": 63488 00:21:24.621 } 00:21:24.621 ] 00:21:24.621 }' 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.621 10:49:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.188 10:49:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:25.188 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.188 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:25.189 [2024-10-30 10:49:46.506205] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 08ffd8d9-7770-49e1-8fc6-a33a10f02c26 '!=' 08ffd8d9-7770-49e1-8fc6-a33a10f02c26 ']' 00:21:25.189 10:49:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84668 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84668 ']' 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84668 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84668 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:25.189 killing process with pid 84668 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84668' 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84668 00:21:25.189 [2024-10-30 10:49:46.583533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.189 10:49:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84668 00:21:25.189 [2024-10-30 10:49:46.583667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.189 [2024-10-30 10:49:46.583769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.189 [2024-10-30 10:49:46.583788] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:25.755 [2024-10-30 10:49:46.941261] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:26.692 10:49:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:26.692 
00:21:26.692 real 0m9.536s 00:21:26.692 user 0m15.640s 00:21:26.692 sys 0m1.430s 00:21:26.692 10:49:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:26.692 10:49:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.692 ************************************ 00:21:26.692 END TEST raid5f_superblock_test 00:21:26.692 ************************************ 00:21:26.692 10:49:48 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:26.692 10:49:48 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:21:26.692 10:49:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:26.692 10:49:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:26.692 10:49:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:26.692 ************************************ 00:21:26.692 START TEST raid5f_rebuild_test 00:21:26.692 ************************************ 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:26.692 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:26.693 10:49:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85165 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85165 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 85165 ']' 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:26.693 10:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.951 [2024-10-30 10:49:48.189500] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:21:26.951 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:26.951 Zero copy mechanism will not be used. 
00:21:26.951 [2024-10-30 10:49:48.189778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85165 ] 00:21:26.951 [2024-10-30 10:49:48.388956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.210 [2024-10-30 10:49:48.547643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.469 [2024-10-30 10:49:48.777304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.469 [2024-10-30 10:49:48.777388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.728 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:27.728 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:21:27.728 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:27.728 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:27.728 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.728 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 BaseBdev1_malloc 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 [2024-10-30 10:49:49.219556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:21:27.987 [2024-10-30 10:49:49.219669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.987 [2024-10-30 10:49:49.219702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:27.987 [2024-10-30 10:49:49.219721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.987 [2024-10-30 10:49:49.222436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.987 [2024-10-30 10:49:49.222500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:27.987 BaseBdev1 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 BaseBdev2_malloc 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 [2024-10-30 10:49:49.273138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:27.987 [2024-10-30 10:49:49.273216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.987 [2024-10-30 10:49:49.273246] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:27.987 [2024-10-30 10:49:49.273267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.987 [2024-10-30 10:49:49.275981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.987 [2024-10-30 10:49:49.276064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:27.987 BaseBdev2 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 BaseBdev3_malloc 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.987 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.987 [2024-10-30 10:49:49.339767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:27.987 [2024-10-30 10:49:49.339850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.987 [2024-10-30 10:49:49.339881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:27.987 [2024-10-30 10:49:49.339899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.988 
[2024-10-30 10:49:49.342851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.988 [2024-10-30 10:49:49.342916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:27.988 BaseBdev3 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.988 BaseBdev4_malloc 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.988 [2024-10-30 10:49:49.391847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:27.988 [2024-10-30 10:49:49.391930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.988 [2024-10-30 10:49:49.391957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:27.988 [2024-10-30 10:49:49.392003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.988 [2024-10-30 10:49:49.394699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.988 [2024-10-30 10:49:49.394764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:21:27.988 BaseBdev4 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.988 spare_malloc 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.988 spare_delay 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.988 [2024-10-30 10:49:49.451473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:27.988 [2024-10-30 10:49:49.451583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.988 [2024-10-30 10:49:49.451610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:27.988 [2024-10-30 10:49:49.451627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.988 [2024-10-30 10:49:49.454560] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.988 [2024-10-30 10:49:49.454619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:27.988 spare 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:27.988 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.247 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.247 [2024-10-30 10:49:49.459627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.247 [2024-10-30 10:49:49.462218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.247 [2024-10-30 10:49:49.462319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:28.247 [2024-10-30 10:49:49.462413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:28.247 [2024-10-30 10:49:49.462573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:28.247 [2024-10-30 10:49:49.462595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:28.247 [2024-10-30 10:49:49.462928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:28.248 [2024-10-30 10:49:49.469564] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:28.248 [2024-10-30 10:49:49.469596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:28.248 [2024-10-30 10:49:49.469838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.248 10:49:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.248 "name": "raid_bdev1", 00:21:28.248 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:28.248 "strip_size_kb": 64, 00:21:28.248 "state": "online", 00:21:28.248 
"raid_level": "raid5f", 00:21:28.248 "superblock": false, 00:21:28.248 "num_base_bdevs": 4, 00:21:28.248 "num_base_bdevs_discovered": 4, 00:21:28.248 "num_base_bdevs_operational": 4, 00:21:28.248 "base_bdevs_list": [ 00:21:28.248 { 00:21:28.248 "name": "BaseBdev1", 00:21:28.248 "uuid": "293f36a2-6b97-5c30-8a79-9fc03b8c0a32", 00:21:28.248 "is_configured": true, 00:21:28.248 "data_offset": 0, 00:21:28.248 "data_size": 65536 00:21:28.248 }, 00:21:28.248 { 00:21:28.248 "name": "BaseBdev2", 00:21:28.248 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:28.248 "is_configured": true, 00:21:28.248 "data_offset": 0, 00:21:28.248 "data_size": 65536 00:21:28.248 }, 00:21:28.248 { 00:21:28.248 "name": "BaseBdev3", 00:21:28.248 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:28.248 "is_configured": true, 00:21:28.248 "data_offset": 0, 00:21:28.248 "data_size": 65536 00:21:28.248 }, 00:21:28.248 { 00:21:28.248 "name": "BaseBdev4", 00:21:28.248 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:28.248 "is_configured": true, 00:21:28.248 "data_offset": 0, 00:21:28.248 "data_size": 65536 00:21:28.248 } 00:21:28.248 ] 00:21:28.248 }' 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.248 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.507 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:28.507 10:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:28.507 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.507 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.507 [2024-10-30 10:49:49.973666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:28.766 10:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:21:28.766 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:29.024 [2024-10-30 10:49:50.377509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:29.024 /dev/nbd0 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.024 1+0 records in 00:21:29.024 1+0 records out 00:21:29.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028222 s, 14.5 MB/s 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:29.024 10:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:21:29.958 512+0 records in 00:21:29.958 512+0 records out 00:21:29.958 100663296 bytes (101 MB, 96 MiB) copied, 0.748722 s, 134 MB/s 00:21:29.958 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:29.958 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:29.958 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:29.958 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.958 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:29.958 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.958 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:30.218 
10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:30.218 [2024-10-30 10:49:51.459526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.218 [2024-10-30 10:49:51.471163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.218 "name": "raid_bdev1", 00:21:30.218 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:30.218 "strip_size_kb": 64, 00:21:30.218 "state": "online", 00:21:30.218 "raid_level": "raid5f", 00:21:30.218 "superblock": false, 00:21:30.218 "num_base_bdevs": 4, 00:21:30.218 "num_base_bdevs_discovered": 3, 00:21:30.218 "num_base_bdevs_operational": 3, 00:21:30.218 "base_bdevs_list": [ 00:21:30.218 { 00:21:30.218 "name": null, 00:21:30.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.218 "is_configured": false, 00:21:30.218 "data_offset": 0, 00:21:30.218 "data_size": 65536 00:21:30.218 }, 00:21:30.218 { 00:21:30.218 "name": "BaseBdev2", 00:21:30.218 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:30.218 "is_configured": true, 00:21:30.218 "data_offset": 0, 00:21:30.218 "data_size": 65536 00:21:30.218 }, 00:21:30.218 { 00:21:30.218 "name": "BaseBdev3", 00:21:30.218 "uuid": 
"ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:30.218 "is_configured": true, 00:21:30.218 "data_offset": 0, 00:21:30.218 "data_size": 65536 00:21:30.218 }, 00:21:30.218 { 00:21:30.218 "name": "BaseBdev4", 00:21:30.218 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:30.218 "is_configured": true, 00:21:30.218 "data_offset": 0, 00:21:30.218 "data_size": 65536 00:21:30.218 } 00:21:30.218 ] 00:21:30.218 }' 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.218 10:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.786 10:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:30.786 10:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.786 10:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.786 [2024-10-30 10:49:52.011374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:30.786 [2024-10-30 10:49:52.026342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:21:30.786 10:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.786 10:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:30.786 [2024-10-30 10:49:52.036174] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:31.723 10:49:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.723 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.723 "name": "raid_bdev1", 00:21:31.723 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:31.723 "strip_size_kb": 64, 00:21:31.723 "state": "online", 00:21:31.723 "raid_level": "raid5f", 00:21:31.723 "superblock": false, 00:21:31.723 "num_base_bdevs": 4, 00:21:31.723 "num_base_bdevs_discovered": 4, 00:21:31.723 "num_base_bdevs_operational": 4, 00:21:31.723 "process": { 00:21:31.723 "type": "rebuild", 00:21:31.723 "target": "spare", 00:21:31.723 "progress": { 00:21:31.723 "blocks": 17280, 00:21:31.723 "percent": 8 00:21:31.723 } 00:21:31.723 }, 00:21:31.723 "base_bdevs_list": [ 00:21:31.723 { 00:21:31.723 "name": "spare", 00:21:31.723 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:31.723 "is_configured": true, 00:21:31.723 "data_offset": 0, 00:21:31.723 "data_size": 65536 00:21:31.723 }, 00:21:31.723 { 00:21:31.723 "name": "BaseBdev2", 00:21:31.723 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:31.723 "is_configured": true, 00:21:31.723 "data_offset": 0, 00:21:31.723 "data_size": 65536 00:21:31.723 }, 00:21:31.723 { 00:21:31.723 "name": "BaseBdev3", 00:21:31.723 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:31.723 "is_configured": true, 00:21:31.723 "data_offset": 0, 00:21:31.724 "data_size": 65536 00:21:31.724 }, 
00:21:31.724 { 00:21:31.724 "name": "BaseBdev4", 00:21:31.724 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:31.724 "is_configured": true, 00:21:31.724 "data_offset": 0, 00:21:31.724 "data_size": 65536 00:21:31.724 } 00:21:31.724 ] 00:21:31.724 }' 00:21:31.724 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.724 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.724 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.724 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.724 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:31.724 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.724 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.983 [2024-10-30 10:49:53.193584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:31.983 [2024-10-30 10:49:53.247459] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:31.983 [2024-10-30 10:49:53.247546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.983 [2024-10-30 10:49:53.247574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:31.983 [2024-10-30 10:49:53.247590] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.983 "name": "raid_bdev1", 00:21:31.983 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:31.983 "strip_size_kb": 64, 00:21:31.983 "state": "online", 00:21:31.983 "raid_level": "raid5f", 00:21:31.983 "superblock": false, 00:21:31.983 "num_base_bdevs": 4, 00:21:31.983 "num_base_bdevs_discovered": 3, 00:21:31.983 "num_base_bdevs_operational": 3, 00:21:31.983 "base_bdevs_list": [ 00:21:31.983 { 00:21:31.983 "name": null, 00:21:31.983 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:31.983 "is_configured": false, 00:21:31.983 "data_offset": 0, 00:21:31.983 "data_size": 65536 00:21:31.983 }, 00:21:31.983 { 00:21:31.983 "name": "BaseBdev2", 00:21:31.983 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:31.983 "is_configured": true, 00:21:31.983 "data_offset": 0, 00:21:31.983 "data_size": 65536 00:21:31.983 }, 00:21:31.983 { 00:21:31.983 "name": "BaseBdev3", 00:21:31.983 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:31.983 "is_configured": true, 00:21:31.983 "data_offset": 0, 00:21:31.983 "data_size": 65536 00:21:31.983 }, 00:21:31.983 { 00:21:31.983 "name": "BaseBdev4", 00:21:31.983 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:31.983 "is_configured": true, 00:21:31.983 "data_offset": 0, 00:21:31.983 "data_size": 65536 00:21:31.983 } 00:21:31.983 ] 00:21:31.983 }' 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.983 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.552 10:49:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.552 "name": "raid_bdev1", 00:21:32.552 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:32.552 "strip_size_kb": 64, 00:21:32.552 "state": "online", 00:21:32.552 "raid_level": "raid5f", 00:21:32.552 "superblock": false, 00:21:32.552 "num_base_bdevs": 4, 00:21:32.552 "num_base_bdevs_discovered": 3, 00:21:32.552 "num_base_bdevs_operational": 3, 00:21:32.552 "base_bdevs_list": [ 00:21:32.552 { 00:21:32.552 "name": null, 00:21:32.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.552 "is_configured": false, 00:21:32.552 "data_offset": 0, 00:21:32.552 "data_size": 65536 00:21:32.552 }, 00:21:32.552 { 00:21:32.552 "name": "BaseBdev2", 00:21:32.552 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:32.552 "is_configured": true, 00:21:32.552 "data_offset": 0, 00:21:32.552 "data_size": 65536 00:21:32.552 }, 00:21:32.552 { 00:21:32.552 "name": "BaseBdev3", 00:21:32.552 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:32.552 "is_configured": true, 00:21:32.552 "data_offset": 0, 00:21:32.552 "data_size": 65536 00:21:32.552 }, 00:21:32.552 { 00:21:32.552 "name": "BaseBdev4", 00:21:32.552 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:32.552 "is_configured": true, 00:21:32.552 "data_offset": 0, 00:21:32.552 "data_size": 65536 00:21:32.552 } 00:21:32.552 ] 00:21:32.552 }' 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.552 10:49:53 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:32.553 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:32.553 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.553 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.553 [2024-10-30 10:49:53.980854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:32.553 [2024-10-30 10:49:53.994839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:21:32.553 10:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.553 10:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:32.553 [2024-10-30 10:49:54.003699] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:33.932 10:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.932 10:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.932 10:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.932 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.932 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.932 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.932 10:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.932 10:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.932 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.932 10:49:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.932 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.932 "name": "raid_bdev1", 00:21:33.932 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:33.932 "strip_size_kb": 64, 00:21:33.932 "state": "online", 00:21:33.932 "raid_level": "raid5f", 00:21:33.932 "superblock": false, 00:21:33.932 "num_base_bdevs": 4, 00:21:33.932 "num_base_bdevs_discovered": 4, 00:21:33.932 "num_base_bdevs_operational": 4, 00:21:33.932 "process": { 00:21:33.932 "type": "rebuild", 00:21:33.932 "target": "spare", 00:21:33.932 "progress": { 00:21:33.932 "blocks": 17280, 00:21:33.932 "percent": 8 00:21:33.932 } 00:21:33.932 }, 00:21:33.932 "base_bdevs_list": [ 00:21:33.932 { 00:21:33.932 "name": "spare", 00:21:33.932 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:33.932 "is_configured": true, 00:21:33.932 "data_offset": 0, 00:21:33.932 "data_size": 65536 00:21:33.932 }, 00:21:33.932 { 00:21:33.932 "name": "BaseBdev2", 00:21:33.932 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:33.932 "is_configured": true, 00:21:33.932 "data_offset": 0, 00:21:33.932 "data_size": 65536 00:21:33.932 }, 00:21:33.932 { 00:21:33.932 "name": "BaseBdev3", 00:21:33.932 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:33.932 "is_configured": true, 00:21:33.932 "data_offset": 0, 00:21:33.932 "data_size": 65536 00:21:33.932 }, 00:21:33.932 { 00:21:33.932 "name": "BaseBdev4", 00:21:33.932 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:33.932 "is_configured": true, 00:21:33.932 "data_offset": 0, 00:21:33.932 "data_size": 65536 00:21:33.932 } 00:21:33.932 ] 00:21:33.932 }' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=669 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.933 "name": "raid_bdev1", 00:21:33.933 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 
00:21:33.933 "strip_size_kb": 64, 00:21:33.933 "state": "online", 00:21:33.933 "raid_level": "raid5f", 00:21:33.933 "superblock": false, 00:21:33.933 "num_base_bdevs": 4, 00:21:33.933 "num_base_bdevs_discovered": 4, 00:21:33.933 "num_base_bdevs_operational": 4, 00:21:33.933 "process": { 00:21:33.933 "type": "rebuild", 00:21:33.933 "target": "spare", 00:21:33.933 "progress": { 00:21:33.933 "blocks": 21120, 00:21:33.933 "percent": 10 00:21:33.933 } 00:21:33.933 }, 00:21:33.933 "base_bdevs_list": [ 00:21:33.933 { 00:21:33.933 "name": "spare", 00:21:33.933 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:33.933 "is_configured": true, 00:21:33.933 "data_offset": 0, 00:21:33.933 "data_size": 65536 00:21:33.933 }, 00:21:33.933 { 00:21:33.933 "name": "BaseBdev2", 00:21:33.933 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:33.933 "is_configured": true, 00:21:33.933 "data_offset": 0, 00:21:33.933 "data_size": 65536 00:21:33.933 }, 00:21:33.933 { 00:21:33.933 "name": "BaseBdev3", 00:21:33.933 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:33.933 "is_configured": true, 00:21:33.933 "data_offset": 0, 00:21:33.933 "data_size": 65536 00:21:33.933 }, 00:21:33.933 { 00:21:33.933 "name": "BaseBdev4", 00:21:33.933 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:33.933 "is_configured": true, 00:21:33.933 "data_offset": 0, 00:21:33.933 "data_size": 65536 00:21:33.933 } 00:21:33.933 ] 00:21:33.933 }' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.933 10:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:34.894 10:49:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.894 10:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.153 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.153 "name": "raid_bdev1", 00:21:35.153 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:35.153 "strip_size_kb": 64, 00:21:35.153 "state": "online", 00:21:35.153 "raid_level": "raid5f", 00:21:35.153 "superblock": false, 00:21:35.153 "num_base_bdevs": 4, 00:21:35.153 "num_base_bdevs_discovered": 4, 00:21:35.153 "num_base_bdevs_operational": 4, 00:21:35.153 "process": { 00:21:35.153 "type": "rebuild", 00:21:35.153 "target": "spare", 00:21:35.153 "progress": { 00:21:35.153 "blocks": 44160, 00:21:35.153 "percent": 22 00:21:35.153 } 00:21:35.153 }, 00:21:35.153 "base_bdevs_list": [ 00:21:35.153 { 00:21:35.153 "name": "spare", 00:21:35.153 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 
00:21:35.153 "is_configured": true, 00:21:35.153 "data_offset": 0, 00:21:35.153 "data_size": 65536 00:21:35.153 }, 00:21:35.153 { 00:21:35.153 "name": "BaseBdev2", 00:21:35.153 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:35.153 "is_configured": true, 00:21:35.153 "data_offset": 0, 00:21:35.153 "data_size": 65536 00:21:35.153 }, 00:21:35.153 { 00:21:35.153 "name": "BaseBdev3", 00:21:35.153 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:35.153 "is_configured": true, 00:21:35.153 "data_offset": 0, 00:21:35.153 "data_size": 65536 00:21:35.153 }, 00:21:35.153 { 00:21:35.153 "name": "BaseBdev4", 00:21:35.153 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:35.153 "is_configured": true, 00:21:35.153 "data_offset": 0, 00:21:35.153 "data_size": 65536 00:21:35.153 } 00:21:35.153 ] 00:21:35.153 }' 00:21:35.153 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.153 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.153 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.153 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.153 10:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.090 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.090 "name": "raid_bdev1", 00:21:36.090 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:36.090 "strip_size_kb": 64, 00:21:36.090 "state": "online", 00:21:36.090 "raid_level": "raid5f", 00:21:36.090 "superblock": false, 00:21:36.090 "num_base_bdevs": 4, 00:21:36.090 "num_base_bdevs_discovered": 4, 00:21:36.090 "num_base_bdevs_operational": 4, 00:21:36.090 "process": { 00:21:36.090 "type": "rebuild", 00:21:36.090 "target": "spare", 00:21:36.090 "progress": { 00:21:36.090 "blocks": 65280, 00:21:36.090 "percent": 33 00:21:36.090 } 00:21:36.091 }, 00:21:36.091 "base_bdevs_list": [ 00:21:36.091 { 00:21:36.091 "name": "spare", 00:21:36.091 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:36.091 "is_configured": true, 00:21:36.091 "data_offset": 0, 00:21:36.091 "data_size": 65536 00:21:36.091 }, 00:21:36.091 { 00:21:36.091 "name": "BaseBdev2", 00:21:36.091 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:36.091 "is_configured": true, 00:21:36.091 "data_offset": 0, 00:21:36.091 "data_size": 65536 00:21:36.091 }, 00:21:36.091 { 00:21:36.091 "name": "BaseBdev3", 00:21:36.091 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:36.091 "is_configured": true, 00:21:36.091 "data_offset": 0, 00:21:36.091 "data_size": 65536 00:21:36.091 }, 00:21:36.091 { 00:21:36.091 "name": 
"BaseBdev4", 00:21:36.091 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:36.091 "is_configured": true, 00:21:36.091 "data_offset": 0, 00:21:36.091 "data_size": 65536 00:21:36.091 } 00:21:36.091 ] 00:21:36.091 }' 00:21:36.091 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.350 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:36.350 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.350 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.350 10:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.288 10:49:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:37.288 "name": "raid_bdev1", 00:21:37.288 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:37.288 "strip_size_kb": 64, 00:21:37.288 "state": "online", 00:21:37.288 "raid_level": "raid5f", 00:21:37.288 "superblock": false, 00:21:37.288 "num_base_bdevs": 4, 00:21:37.288 "num_base_bdevs_discovered": 4, 00:21:37.288 "num_base_bdevs_operational": 4, 00:21:37.288 "process": { 00:21:37.288 "type": "rebuild", 00:21:37.288 "target": "spare", 00:21:37.288 "progress": { 00:21:37.288 "blocks": 88320, 00:21:37.288 "percent": 44 00:21:37.288 } 00:21:37.288 }, 00:21:37.288 "base_bdevs_list": [ 00:21:37.288 { 00:21:37.288 "name": "spare", 00:21:37.288 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:37.288 "is_configured": true, 00:21:37.288 "data_offset": 0, 00:21:37.288 "data_size": 65536 00:21:37.288 }, 00:21:37.288 { 00:21:37.288 "name": "BaseBdev2", 00:21:37.288 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:37.288 "is_configured": true, 00:21:37.288 "data_offset": 0, 00:21:37.288 "data_size": 65536 00:21:37.288 }, 00:21:37.288 { 00:21:37.288 "name": "BaseBdev3", 00:21:37.288 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:37.288 "is_configured": true, 00:21:37.288 "data_offset": 0, 00:21:37.288 "data_size": 65536 00:21:37.288 }, 00:21:37.288 { 00:21:37.288 "name": "BaseBdev4", 00:21:37.288 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:37.288 "is_configured": true, 00:21:37.288 "data_offset": 0, 00:21:37.288 "data_size": 65536 00:21:37.288 } 00:21:37.288 ] 00:21:37.288 }' 00:21:37.288 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.548 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.548 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.548 10:49:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.548 10:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.485 "name": "raid_bdev1", 00:21:38.485 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:38.485 "strip_size_kb": 64, 00:21:38.485 "state": "online", 00:21:38.485 "raid_level": "raid5f", 00:21:38.485 "superblock": false, 00:21:38.485 "num_base_bdevs": 4, 00:21:38.485 "num_base_bdevs_discovered": 4, 00:21:38.485 "num_base_bdevs_operational": 4, 00:21:38.485 "process": { 00:21:38.485 "type": "rebuild", 00:21:38.485 "target": "spare", 00:21:38.485 "progress": { 00:21:38.485 "blocks": 109440, 00:21:38.485 "percent": 55 00:21:38.485 } 
00:21:38.485 }, 00:21:38.485 "base_bdevs_list": [ 00:21:38.485 { 00:21:38.485 "name": "spare", 00:21:38.485 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:38.485 "is_configured": true, 00:21:38.485 "data_offset": 0, 00:21:38.485 "data_size": 65536 00:21:38.485 }, 00:21:38.485 { 00:21:38.485 "name": "BaseBdev2", 00:21:38.485 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:38.485 "is_configured": true, 00:21:38.485 "data_offset": 0, 00:21:38.485 "data_size": 65536 00:21:38.485 }, 00:21:38.485 { 00:21:38.485 "name": "BaseBdev3", 00:21:38.485 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:38.485 "is_configured": true, 00:21:38.485 "data_offset": 0, 00:21:38.485 "data_size": 65536 00:21:38.485 }, 00:21:38.485 { 00:21:38.485 "name": "BaseBdev4", 00:21:38.485 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:38.485 "is_configured": true, 00:21:38.485 "data_offset": 0, 00:21:38.485 "data_size": 65536 00:21:38.485 } 00:21:38.485 ] 00:21:38.485 }' 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:38.485 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.743 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.743 10:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:39.679 10:50:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:39.679 10:50:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.680 10:50:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.680 10:50:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:39.680 
10:50:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:39.680 10:50:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.680 10:50:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.680 10:50:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.680 10:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.680 10:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.680 10:50:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.680 10:50:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.680 "name": "raid_bdev1", 00:21:39.680 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:39.680 "strip_size_kb": 64, 00:21:39.680 "state": "online", 00:21:39.680 "raid_level": "raid5f", 00:21:39.680 "superblock": false, 00:21:39.680 "num_base_bdevs": 4, 00:21:39.680 "num_base_bdevs_discovered": 4, 00:21:39.680 "num_base_bdevs_operational": 4, 00:21:39.680 "process": { 00:21:39.680 "type": "rebuild", 00:21:39.680 "target": "spare", 00:21:39.680 "progress": { 00:21:39.680 "blocks": 132480, 00:21:39.680 "percent": 67 00:21:39.680 } 00:21:39.680 }, 00:21:39.680 "base_bdevs_list": [ 00:21:39.680 { 00:21:39.680 "name": "spare", 00:21:39.680 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:39.680 "is_configured": true, 00:21:39.680 "data_offset": 0, 00:21:39.680 "data_size": 65536 00:21:39.680 }, 00:21:39.680 { 00:21:39.680 "name": "BaseBdev2", 00:21:39.680 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:39.680 "is_configured": true, 00:21:39.680 "data_offset": 0, 00:21:39.680 "data_size": 65536 00:21:39.680 }, 00:21:39.680 { 00:21:39.680 "name": "BaseBdev3", 00:21:39.680 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 
00:21:39.680 "is_configured": true, 00:21:39.680 "data_offset": 0, 00:21:39.680 "data_size": 65536 00:21:39.680 }, 00:21:39.680 { 00:21:39.680 "name": "BaseBdev4", 00:21:39.680 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:39.680 "is_configured": true, 00:21:39.680 "data_offset": 0, 00:21:39.680 "data_size": 65536 00:21:39.680 } 00:21:39.680 ] 00:21:39.680 }' 00:21:39.680 10:50:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.680 10:50:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:39.680 10:50:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.938 10:50:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:39.938 10:50:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:40.872 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:40.872 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:40.872 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:40.872 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:40.872 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:40.872 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:40.872 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:40.873 "name": "raid_bdev1", 00:21:40.873 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:40.873 "strip_size_kb": 64, 00:21:40.873 "state": "online", 00:21:40.873 "raid_level": "raid5f", 00:21:40.873 "superblock": false, 00:21:40.873 "num_base_bdevs": 4, 00:21:40.873 "num_base_bdevs_discovered": 4, 00:21:40.873 "num_base_bdevs_operational": 4, 00:21:40.873 "process": { 00:21:40.873 "type": "rebuild", 00:21:40.873 "target": "spare", 00:21:40.873 "progress": { 00:21:40.873 "blocks": 153600, 00:21:40.873 "percent": 78 00:21:40.873 } 00:21:40.873 }, 00:21:40.873 "base_bdevs_list": [ 00:21:40.873 { 00:21:40.873 "name": "spare", 00:21:40.873 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:40.873 "is_configured": true, 00:21:40.873 "data_offset": 0, 00:21:40.873 "data_size": 65536 00:21:40.873 }, 00:21:40.873 { 00:21:40.873 "name": "BaseBdev2", 00:21:40.873 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:40.873 "is_configured": true, 00:21:40.873 "data_offset": 0, 00:21:40.873 "data_size": 65536 00:21:40.873 }, 00:21:40.873 { 00:21:40.873 "name": "BaseBdev3", 00:21:40.873 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:40.873 "is_configured": true, 00:21:40.873 "data_offset": 0, 00:21:40.873 "data_size": 65536 00:21:40.873 }, 00:21:40.873 { 00:21:40.873 "name": "BaseBdev4", 00:21:40.873 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:40.873 "is_configured": true, 00:21:40.873 "data_offset": 0, 00:21:40.873 "data_size": 65536 00:21:40.873 } 00:21:40.873 ] 00:21:40.873 }' 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:40.873 10:50:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.250 "name": "raid_bdev1", 00:21:42.250 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:42.250 "strip_size_kb": 64, 00:21:42.250 "state": "online", 00:21:42.250 "raid_level": "raid5f", 00:21:42.250 "superblock": false, 00:21:42.250 "num_base_bdevs": 4, 00:21:42.250 "num_base_bdevs_discovered": 4, 00:21:42.250 "num_base_bdevs_operational": 4, 00:21:42.250 
"process": { 00:21:42.250 "type": "rebuild", 00:21:42.250 "target": "spare", 00:21:42.250 "progress": { 00:21:42.250 "blocks": 176640, 00:21:42.250 "percent": 89 00:21:42.250 } 00:21:42.250 }, 00:21:42.250 "base_bdevs_list": [ 00:21:42.250 { 00:21:42.250 "name": "spare", 00:21:42.250 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:42.250 "is_configured": true, 00:21:42.250 "data_offset": 0, 00:21:42.250 "data_size": 65536 00:21:42.250 }, 00:21:42.250 { 00:21:42.250 "name": "BaseBdev2", 00:21:42.250 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:42.250 "is_configured": true, 00:21:42.250 "data_offset": 0, 00:21:42.250 "data_size": 65536 00:21:42.250 }, 00:21:42.250 { 00:21:42.250 "name": "BaseBdev3", 00:21:42.250 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:42.250 "is_configured": true, 00:21:42.250 "data_offset": 0, 00:21:42.250 "data_size": 65536 00:21:42.250 }, 00:21:42.250 { 00:21:42.250 "name": "BaseBdev4", 00:21:42.250 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:42.250 "is_configured": true, 00:21:42.250 "data_offset": 0, 00:21:42.250 "data_size": 65536 00:21:42.250 } 00:21:42.250 ] 00:21:42.250 }' 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:42.250 10:50:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:43.187 [2024-10-30 10:50:04.400380] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:43.187 [2024-10-30 10:50:04.400513] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:43.187 [2024-10-30 
10:50:04.400598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.187 "name": "raid_bdev1", 00:21:43.187 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:43.187 "strip_size_kb": 64, 00:21:43.187 "state": "online", 00:21:43.187 "raid_level": "raid5f", 00:21:43.187 "superblock": false, 00:21:43.187 "num_base_bdevs": 4, 00:21:43.187 "num_base_bdevs_discovered": 4, 00:21:43.187 "num_base_bdevs_operational": 4, 00:21:43.187 "base_bdevs_list": [ 00:21:43.187 { 00:21:43.187 "name": "spare", 00:21:43.187 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:43.187 "is_configured": true, 00:21:43.187 "data_offset": 0, 00:21:43.187 "data_size": 65536 
00:21:43.187 }, 00:21:43.187 { 00:21:43.187 "name": "BaseBdev2", 00:21:43.187 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:43.187 "is_configured": true, 00:21:43.187 "data_offset": 0, 00:21:43.187 "data_size": 65536 00:21:43.187 }, 00:21:43.187 { 00:21:43.187 "name": "BaseBdev3", 00:21:43.187 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:43.187 "is_configured": true, 00:21:43.187 "data_offset": 0, 00:21:43.187 "data_size": 65536 00:21:43.187 }, 00:21:43.187 { 00:21:43.187 "name": "BaseBdev4", 00:21:43.187 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:43.187 "is_configured": true, 00:21:43.187 "data_offset": 0, 00:21:43.187 "data_size": 65536 00:21:43.187 } 00:21:43.187 ] 00:21:43.187 }' 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.187 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.188 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:43.447 "name": "raid_bdev1", 00:21:43.447 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:43.447 "strip_size_kb": 64, 00:21:43.447 "state": "online", 00:21:43.447 "raid_level": "raid5f", 00:21:43.447 "superblock": false, 00:21:43.447 "num_base_bdevs": 4, 00:21:43.447 "num_base_bdevs_discovered": 4, 00:21:43.447 "num_base_bdevs_operational": 4, 00:21:43.447 "base_bdevs_list": [ 00:21:43.447 { 00:21:43.447 "name": "spare", 00:21:43.447 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:43.447 "is_configured": true, 00:21:43.447 "data_offset": 0, 00:21:43.447 "data_size": 65536 00:21:43.447 }, 00:21:43.447 { 00:21:43.447 "name": "BaseBdev2", 00:21:43.447 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:43.447 "is_configured": true, 00:21:43.447 "data_offset": 0, 00:21:43.447 "data_size": 65536 00:21:43.447 }, 00:21:43.447 { 00:21:43.447 "name": "BaseBdev3", 00:21:43.447 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:43.447 "is_configured": true, 00:21:43.447 "data_offset": 0, 00:21:43.447 "data_size": 65536 00:21:43.447 }, 00:21:43.447 { 00:21:43.447 "name": "BaseBdev4", 00:21:43.447 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:43.447 "is_configured": true, 00:21:43.447 "data_offset": 0, 00:21:43.447 "data_size": 65536 00:21:43.447 } 00:21:43.447 ] 00:21:43.447 }' 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.447 "name": "raid_bdev1", 
00:21:43.447 "uuid": "b43c46d3-a3db-40fa-b62e-dde72c7b905e", 00:21:43.447 "strip_size_kb": 64, 00:21:43.447 "state": "online", 00:21:43.447 "raid_level": "raid5f", 00:21:43.447 "superblock": false, 00:21:43.447 "num_base_bdevs": 4, 00:21:43.447 "num_base_bdevs_discovered": 4, 00:21:43.447 "num_base_bdevs_operational": 4, 00:21:43.447 "base_bdevs_list": [ 00:21:43.447 { 00:21:43.447 "name": "spare", 00:21:43.447 "uuid": "d12f3c40-5a6e-5d24-bec3-e59007ecbe50", 00:21:43.447 "is_configured": true, 00:21:43.447 "data_offset": 0, 00:21:43.447 "data_size": 65536 00:21:43.447 }, 00:21:43.447 { 00:21:43.447 "name": "BaseBdev2", 00:21:43.447 "uuid": "73c2ee3a-4fb9-5354-9795-fc8e848be38b", 00:21:43.447 "is_configured": true, 00:21:43.447 "data_offset": 0, 00:21:43.447 "data_size": 65536 00:21:43.447 }, 00:21:43.447 { 00:21:43.447 "name": "BaseBdev3", 00:21:43.447 "uuid": "ec1215b4-0823-542c-8bee-80c9ca9e1f5c", 00:21:43.447 "is_configured": true, 00:21:43.447 "data_offset": 0, 00:21:43.447 "data_size": 65536 00:21:43.447 }, 00:21:43.447 { 00:21:43.447 "name": "BaseBdev4", 00:21:43.447 "uuid": "b0f6c2e3-cce2-5d41-b683-291fa064a9fd", 00:21:43.447 "is_configured": true, 00:21:43.447 "data_offset": 0, 00:21:43.447 "data_size": 65536 00:21:43.447 } 00:21:43.447 ] 00:21:43.447 }' 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.447 10:50:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.014 [2024-10-30 10:50:05.360014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:44.014 [2024-10-30 10:50:05.360192] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:21:44.014 [2024-10-30 10:50:05.360314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.014 [2024-10-30 10:50:05.360449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:44.014 [2024-10-30 10:50:05.360468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:44.014 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:44.579 /dev/nbd0 00:21:44.579 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:44.579 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:44.579 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:44.580 1+0 records in 00:21:44.580 1+0 records out 00:21:44.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297706 s, 13.8 MB/s 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:44.580 10:50:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:44.838 /dev/nbd1 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:44.838 10:50:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:44.838 1+0 records in 00:21:44.838 1+0 records out 00:21:44.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445809 s, 9.2 MB/s 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:44.838 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:45.096 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:45.096 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:45.096 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:45.096 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:45.096 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:45.096 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.096 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:45.355 10:50:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85165 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 85165 ']' 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 85165 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85165 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:45.615 killing process with pid 85165 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85165' 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 85165 00:21:45.615 Received shutdown signal, test time was about 60.000000 seconds 00:21:45.615 00:21:45.615 Latency(us) 00:21:45.615 [2024-10-30T10:50:07.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.615 [2024-10-30T10:50:07.085Z] =================================================================================================================== 00:21:45.615 [2024-10-30T10:50:07.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:45.615 [2024-10-30 10:50:07.060414] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:45.615 10:50:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 85165 00:21:46.183 [2024-10-30 10:50:07.515038] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:47.120 10:50:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:21:47.120 00:21:47.120 real 0m20.508s 00:21:47.120 user 0m25.603s 00:21:47.120 sys 0m2.440s 00:21:47.120 10:50:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:47.120 10:50:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.120 ************************************ 00:21:47.120 END TEST raid5f_rebuild_test 00:21:47.120 ************************************ 00:21:47.379 10:50:08 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:21:47.379 10:50:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:47.379 10:50:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:47.379 10:50:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.379 ************************************ 00:21:47.379 START TEST raid5f_rebuild_test_sb 00:21:47.380 ************************************ 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f 
'!=' raid1 ']' 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85678 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85678 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85678 ']' 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:47.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:47.380 10:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.380 [2024-10-30 10:50:08.743520] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:21:47.380 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:21:47.380 Zero copy mechanism will not be used. 00:21:47.380 [2024-10-30 10:50:08.743729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85678 ] 00:21:47.639 [2024-10-30 10:50:08.934386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.639 [2024-10-30 10:50:09.075694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.898 [2024-10-30 10:50:09.291644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.898 [2024-10-30 10:50:09.291703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.465 BaseBdev1_malloc 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.465 [2024-10-30 
10:50:09.748103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:48.465 [2024-10-30 10:50:09.748201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.465 [2024-10-30 10:50:09.748232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:48.465 [2024-10-30 10:50:09.748250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.465 [2024-10-30 10:50:09.750905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.465 [2024-10-30 10:50:09.750981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:48.465 BaseBdev1 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.465 BaseBdev2_malloc 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.465 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.465 [2024-10-30 10:50:09.803813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:48.465 [2024-10-30 10:50:09.803889] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.466 [2024-10-30 10:50:09.803917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:48.466 [2024-10-30 10:50:09.803936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.466 [2024-10-30 10:50:09.806674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.466 [2024-10-30 10:50:09.806733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:48.466 BaseBdev2 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.466 BaseBdev3_malloc 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.466 [2024-10-30 10:50:09.873928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:48.466 [2024-10-30 10:50:09.874045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.466 [2024-10-30 10:50:09.874076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:21:48.466 [2024-10-30 10:50:09.874095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.466 [2024-10-30 10:50:09.876799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.466 [2024-10-30 10:50:09.876879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:48.466 BaseBdev3 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.466 BaseBdev4_malloc 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.466 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.466 [2024-10-30 10:50:09.931663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:48.466 [2024-10-30 10:50:09.931765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.466 [2024-10-30 10:50:09.931793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:48.466 [2024-10-30 10:50:09.931810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.466 [2024-10-30 10:50:09.934936] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.466 [2024-10-30 10:50:09.935014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:48.724 BaseBdev4 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.724 spare_malloc 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.724 spare_delay 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.724 10:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.724 [2024-10-30 10:50:10.004748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:48.724 [2024-10-30 10:50:10.004843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.724 [2024-10-30 10:50:10.004874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:21:48.724 [2024-10-30 10:50:10.004905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.724 [2024-10-30 10:50:10.007939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.724 [2024-10-30 10:50:10.008093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:48.724 spare 00:21:48.724 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.724 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:48.724 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.724 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.724 [2024-10-30 10:50:10.017044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.724 [2024-10-30 10:50:10.019697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:48.724 [2024-10-30 10:50:10.019805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:48.724 [2024-10-30 10:50:10.019941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:48.724 [2024-10-30 10:50:10.020289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:48.724 [2024-10-30 10:50:10.020325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:48.724 [2024-10-30 10:50:10.020714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:48.724 [2024-10-30 10:50:10.027506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:48.725 [2024-10-30 10:50:10.027547] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:48.725 [2024-10-30 10:50:10.027896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.725 "name": "raid_bdev1", 00:21:48.725 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:48.725 "strip_size_kb": 64, 00:21:48.725 "state": "online", 00:21:48.725 "raid_level": "raid5f", 00:21:48.725 "superblock": true, 00:21:48.725 "num_base_bdevs": 4, 00:21:48.725 "num_base_bdevs_discovered": 4, 00:21:48.725 "num_base_bdevs_operational": 4, 00:21:48.725 "base_bdevs_list": [ 00:21:48.725 { 00:21:48.725 "name": "BaseBdev1", 00:21:48.725 "uuid": "4d28ca0b-6571-536b-be20-e2e03c315179", 00:21:48.725 "is_configured": true, 00:21:48.725 "data_offset": 2048, 00:21:48.725 "data_size": 63488 00:21:48.725 }, 00:21:48.725 { 00:21:48.725 "name": "BaseBdev2", 00:21:48.725 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:48.725 "is_configured": true, 00:21:48.725 "data_offset": 2048, 00:21:48.725 "data_size": 63488 00:21:48.725 }, 00:21:48.725 { 00:21:48.725 "name": "BaseBdev3", 00:21:48.725 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:48.725 "is_configured": true, 00:21:48.725 "data_offset": 2048, 00:21:48.725 "data_size": 63488 00:21:48.725 }, 00:21:48.725 { 00:21:48.725 "name": "BaseBdev4", 00:21:48.725 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:48.725 "is_configured": true, 00:21:48.725 "data_offset": 2048, 00:21:48.725 "data_size": 63488 00:21:48.725 } 00:21:48.725 ] 00:21:48.725 }' 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.725 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:49.293 [2024-10-30 10:50:10.556295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.293 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:49.552 [2024-10-30 10:50:10.920226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:49.552 /dev/nbd0 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:21:49.552 1+0 records in 00:21:49.552 1+0 records out 00:21:49.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275185 s, 14.9 MB/s 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:49.552 10:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:21:50.485 496+0 records in 00:21:50.485 496+0 records out 00:21:50.485 97517568 bytes (98 MB, 93 MiB) copied, 0.611731 s, 159 MB/s 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.485 [2024-10-30 10:50:11.905702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:50.485 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.486 [2024-10-30 10:50:11.925257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 
3 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.486 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.744 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.744 "name": "raid_bdev1", 00:21:50.744 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:50.744 "strip_size_kb": 64, 00:21:50.744 "state": "online", 00:21:50.744 "raid_level": "raid5f", 00:21:50.744 "superblock": true, 00:21:50.744 "num_base_bdevs": 4, 00:21:50.744 "num_base_bdevs_discovered": 3, 00:21:50.744 
"num_base_bdevs_operational": 3, 00:21:50.744 "base_bdevs_list": [ 00:21:50.744 { 00:21:50.744 "name": null, 00:21:50.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.744 "is_configured": false, 00:21:50.744 "data_offset": 0, 00:21:50.744 "data_size": 63488 00:21:50.744 }, 00:21:50.744 { 00:21:50.744 "name": "BaseBdev2", 00:21:50.744 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:50.744 "is_configured": true, 00:21:50.744 "data_offset": 2048, 00:21:50.744 "data_size": 63488 00:21:50.744 }, 00:21:50.744 { 00:21:50.744 "name": "BaseBdev3", 00:21:50.744 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:50.744 "is_configured": true, 00:21:50.744 "data_offset": 2048, 00:21:50.744 "data_size": 63488 00:21:50.744 }, 00:21:50.744 { 00:21:50.744 "name": "BaseBdev4", 00:21:50.744 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:50.744 "is_configured": true, 00:21:50.744 "data_offset": 2048, 00:21:50.744 "data_size": 63488 00:21:50.744 } 00:21:50.744 ] 00:21:50.744 }' 00:21:50.744 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.744 10:50:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.003 10:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.003 10:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.003 10:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.003 [2024-10-30 10:50:12.445413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.003 [2024-10-30 10:50:12.460560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:21:51.003 10:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.003 10:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:51.003 
[2024-10-30 10:50:12.469582] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.399 "name": "raid_bdev1", 00:21:52.399 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:52.399 "strip_size_kb": 64, 00:21:52.399 "state": "online", 00:21:52.399 "raid_level": "raid5f", 00:21:52.399 "superblock": true, 00:21:52.399 "num_base_bdevs": 4, 00:21:52.399 "num_base_bdevs_discovered": 4, 00:21:52.399 "num_base_bdevs_operational": 4, 00:21:52.399 "process": { 00:21:52.399 "type": "rebuild", 00:21:52.399 "target": "spare", 00:21:52.399 "progress": { 00:21:52.399 "blocks": 17280, 00:21:52.399 "percent": 9 00:21:52.399 } 00:21:52.399 }, 00:21:52.399 "base_bdevs_list": [ 00:21:52.399 { 00:21:52.399 "name": 
"spare", 00:21:52.399 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:21:52.399 "is_configured": true, 00:21:52.399 "data_offset": 2048, 00:21:52.399 "data_size": 63488 00:21:52.399 }, 00:21:52.399 { 00:21:52.399 "name": "BaseBdev2", 00:21:52.399 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:52.399 "is_configured": true, 00:21:52.399 "data_offset": 2048, 00:21:52.399 "data_size": 63488 00:21:52.399 }, 00:21:52.399 { 00:21:52.399 "name": "BaseBdev3", 00:21:52.399 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:52.399 "is_configured": true, 00:21:52.399 "data_offset": 2048, 00:21:52.399 "data_size": 63488 00:21:52.399 }, 00:21:52.399 { 00:21:52.399 "name": "BaseBdev4", 00:21:52.399 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:52.399 "is_configured": true, 00:21:52.399 "data_offset": 2048, 00:21:52.399 "data_size": 63488 00:21:52.399 } 00:21:52.399 ] 00:21:52.399 }' 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.399 [2024-10-30 10:50:13.638864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.399 [2024-10-30 10:50:13.681712] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:52.399 [2024-10-30 
10:50:13.681829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.399 [2024-10-30 10:50:13.681854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.399 [2024-10-30 10:50:13.681870] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.399 "name": "raid_bdev1", 00:21:52.399 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:52.399 "strip_size_kb": 64, 00:21:52.399 "state": "online", 00:21:52.399 "raid_level": "raid5f", 00:21:52.399 "superblock": true, 00:21:52.399 "num_base_bdevs": 4, 00:21:52.399 "num_base_bdevs_discovered": 3, 00:21:52.399 "num_base_bdevs_operational": 3, 00:21:52.399 "base_bdevs_list": [ 00:21:52.399 { 00:21:52.399 "name": null, 00:21:52.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.399 "is_configured": false, 00:21:52.399 "data_offset": 0, 00:21:52.399 "data_size": 63488 00:21:52.399 }, 00:21:52.399 { 00:21:52.399 "name": "BaseBdev2", 00:21:52.399 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:52.399 "is_configured": true, 00:21:52.399 "data_offset": 2048, 00:21:52.399 "data_size": 63488 00:21:52.399 }, 00:21:52.399 { 00:21:52.399 "name": "BaseBdev3", 00:21:52.399 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:52.399 "is_configured": true, 00:21:52.399 "data_offset": 2048, 00:21:52.399 "data_size": 63488 00:21:52.399 }, 00:21:52.399 { 00:21:52.399 "name": "BaseBdev4", 00:21:52.399 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:52.399 "is_configured": true, 00:21:52.399 "data_offset": 2048, 00:21:52.399 "data_size": 63488 00:21:52.399 } 00:21:52.399 ] 00:21:52.399 }' 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.399 10:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.968 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:52.968 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:21:52.968 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:52.968 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:52.968 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.968 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.968 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.969 "name": "raid_bdev1", 00:21:52.969 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:52.969 "strip_size_kb": 64, 00:21:52.969 "state": "online", 00:21:52.969 "raid_level": "raid5f", 00:21:52.969 "superblock": true, 00:21:52.969 "num_base_bdevs": 4, 00:21:52.969 "num_base_bdevs_discovered": 3, 00:21:52.969 "num_base_bdevs_operational": 3, 00:21:52.969 "base_bdevs_list": [ 00:21:52.969 { 00:21:52.969 "name": null, 00:21:52.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.969 "is_configured": false, 00:21:52.969 "data_offset": 0, 00:21:52.969 "data_size": 63488 00:21:52.969 }, 00:21:52.969 { 00:21:52.969 "name": "BaseBdev2", 00:21:52.969 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:52.969 "is_configured": true, 00:21:52.969 "data_offset": 2048, 00:21:52.969 "data_size": 63488 00:21:52.969 }, 00:21:52.969 { 00:21:52.969 "name": "BaseBdev3", 00:21:52.969 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:52.969 "is_configured": true, 
00:21:52.969 "data_offset": 2048, 00:21:52.969 "data_size": 63488 00:21:52.969 }, 00:21:52.969 { 00:21:52.969 "name": "BaseBdev4", 00:21:52.969 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:52.969 "is_configured": true, 00:21:52.969 "data_offset": 2048, 00:21:52.969 "data_size": 63488 00:21:52.969 } 00:21:52.969 ] 00:21:52.969 }' 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.969 [2024-10-30 10:50:14.404959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.969 [2024-10-30 10:50:14.419146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.969 10:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:52.969 [2024-10-30 10:50:14.427912] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.348 10:50:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.348 "name": "raid_bdev1", 00:21:54.348 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:54.348 "strip_size_kb": 64, 00:21:54.348 "state": "online", 00:21:54.348 "raid_level": "raid5f", 00:21:54.348 "superblock": true, 00:21:54.348 "num_base_bdevs": 4, 00:21:54.348 "num_base_bdevs_discovered": 4, 00:21:54.348 "num_base_bdevs_operational": 4, 00:21:54.348 "process": { 00:21:54.348 "type": "rebuild", 00:21:54.348 "target": "spare", 00:21:54.348 "progress": { 00:21:54.348 "blocks": 17280, 00:21:54.348 "percent": 9 00:21:54.348 } 00:21:54.348 }, 00:21:54.348 "base_bdevs_list": [ 00:21:54.348 { 00:21:54.348 "name": "spare", 00:21:54.348 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:21:54.348 "is_configured": true, 00:21:54.348 "data_offset": 2048, 00:21:54.348 "data_size": 63488 00:21:54.348 }, 00:21:54.348 { 00:21:54.348 "name": "BaseBdev2", 00:21:54.348 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:54.348 "is_configured": true, 00:21:54.348 "data_offset": 2048, 00:21:54.348 "data_size": 63488 
00:21:54.348 }, 00:21:54.348 { 00:21:54.348 "name": "BaseBdev3", 00:21:54.348 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:54.348 "is_configured": true, 00:21:54.348 "data_offset": 2048, 00:21:54.348 "data_size": 63488 00:21:54.348 }, 00:21:54.348 { 00:21:54.348 "name": "BaseBdev4", 00:21:54.348 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:54.348 "is_configured": true, 00:21:54.348 "data_offset": 2048, 00:21:54.348 "data_size": 63488 00:21:54.348 } 00:21:54.348 ] 00:21:54.348 }' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:54.348 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=689 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.348 10:50:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.348 "name": "raid_bdev1", 00:21:54.348 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:54.348 "strip_size_kb": 64, 00:21:54.348 "state": "online", 00:21:54.348 "raid_level": "raid5f", 00:21:54.348 "superblock": true, 00:21:54.348 "num_base_bdevs": 4, 00:21:54.348 "num_base_bdevs_discovered": 4, 00:21:54.348 "num_base_bdevs_operational": 4, 00:21:54.348 "process": { 00:21:54.348 "type": "rebuild", 00:21:54.348 "target": "spare", 00:21:54.348 "progress": { 00:21:54.348 "blocks": 21120, 00:21:54.348 "percent": 11 00:21:54.348 } 00:21:54.348 }, 00:21:54.348 "base_bdevs_list": [ 00:21:54.348 { 00:21:54.348 "name": "spare", 00:21:54.348 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:21:54.348 "is_configured": true, 00:21:54.348 "data_offset": 2048, 00:21:54.348 "data_size": 63488 00:21:54.348 }, 00:21:54.348 { 00:21:54.348 "name": "BaseBdev2", 00:21:54.348 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:54.348 "is_configured": true, 00:21:54.348 "data_offset": 2048, 00:21:54.348 "data_size": 63488 
00:21:54.348 }, 00:21:54.348 { 00:21:54.348 "name": "BaseBdev3", 00:21:54.348 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:54.348 "is_configured": true, 00:21:54.348 "data_offset": 2048, 00:21:54.348 "data_size": 63488 00:21:54.348 }, 00:21:54.348 { 00:21:54.348 "name": "BaseBdev4", 00:21:54.348 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:54.348 "is_configured": true, 00:21:54.348 "data_offset": 2048, 00:21:54.348 "data_size": 63488 00:21:54.348 } 00:21:54.348 ] 00:21:54.348 }' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.348 10:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.283 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:55.543 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.543 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.543 "name": "raid_bdev1", 00:21:55.543 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:55.543 "strip_size_kb": 64, 00:21:55.543 "state": "online", 00:21:55.543 "raid_level": "raid5f", 00:21:55.543 "superblock": true, 00:21:55.543 "num_base_bdevs": 4, 00:21:55.543 "num_base_bdevs_discovered": 4, 00:21:55.543 "num_base_bdevs_operational": 4, 00:21:55.543 "process": { 00:21:55.543 "type": "rebuild", 00:21:55.543 "target": "spare", 00:21:55.543 "progress": { 00:21:55.543 "blocks": 44160, 00:21:55.543 "percent": 23 00:21:55.543 } 00:21:55.543 }, 00:21:55.543 "base_bdevs_list": [ 00:21:55.543 { 00:21:55.543 "name": "spare", 00:21:55.543 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:21:55.543 "is_configured": true, 00:21:55.543 "data_offset": 2048, 00:21:55.543 "data_size": 63488 00:21:55.543 }, 00:21:55.543 { 00:21:55.543 "name": "BaseBdev2", 00:21:55.543 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:55.543 "is_configured": true, 00:21:55.543 "data_offset": 2048, 00:21:55.543 "data_size": 63488 00:21:55.543 }, 00:21:55.543 { 00:21:55.543 "name": "BaseBdev3", 00:21:55.543 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:55.543 "is_configured": true, 00:21:55.543 "data_offset": 2048, 00:21:55.543 "data_size": 63488 00:21:55.543 }, 00:21:55.543 { 00:21:55.543 "name": "BaseBdev4", 00:21:55.543 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:55.543 "is_configured": true, 00:21:55.543 "data_offset": 2048, 00:21:55.543 "data_size": 63488 00:21:55.543 } 00:21:55.543 ] 00:21:55.543 }' 00:21:55.543 10:50:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.543 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.543 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.543 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.543 10:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:56.480 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.740 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.740 "name": "raid_bdev1", 00:21:56.740 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:56.740 
"strip_size_kb": 64, 00:21:56.740 "state": "online", 00:21:56.740 "raid_level": "raid5f", 00:21:56.740 "superblock": true, 00:21:56.740 "num_base_bdevs": 4, 00:21:56.740 "num_base_bdevs_discovered": 4, 00:21:56.740 "num_base_bdevs_operational": 4, 00:21:56.740 "process": { 00:21:56.740 "type": "rebuild", 00:21:56.740 "target": "spare", 00:21:56.740 "progress": { 00:21:56.740 "blocks": 65280, 00:21:56.740 "percent": 34 00:21:56.740 } 00:21:56.740 }, 00:21:56.740 "base_bdevs_list": [ 00:21:56.740 { 00:21:56.740 "name": "spare", 00:21:56.740 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:21:56.740 "is_configured": true, 00:21:56.740 "data_offset": 2048, 00:21:56.740 "data_size": 63488 00:21:56.740 }, 00:21:56.740 { 00:21:56.740 "name": "BaseBdev2", 00:21:56.740 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:56.740 "is_configured": true, 00:21:56.740 "data_offset": 2048, 00:21:56.740 "data_size": 63488 00:21:56.740 }, 00:21:56.740 { 00:21:56.740 "name": "BaseBdev3", 00:21:56.740 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:56.740 "is_configured": true, 00:21:56.740 "data_offset": 2048, 00:21:56.740 "data_size": 63488 00:21:56.740 }, 00:21:56.740 { 00:21:56.740 "name": "BaseBdev4", 00:21:56.740 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:56.740 "is_configured": true, 00:21:56.740 "data_offset": 2048, 00:21:56.740 "data_size": 63488 00:21:56.740 } 00:21:56.740 ] 00:21:56.740 }' 00:21:56.740 10:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.740 10:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:56.740 10:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.740 10:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:56.740 10:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:57.676 
10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.676 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.676 "name": "raid_bdev1", 00:21:57.676 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:57.676 "strip_size_kb": 64, 00:21:57.676 "state": "online", 00:21:57.676 "raid_level": "raid5f", 00:21:57.676 "superblock": true, 00:21:57.676 "num_base_bdevs": 4, 00:21:57.676 "num_base_bdevs_discovered": 4, 00:21:57.676 "num_base_bdevs_operational": 4, 00:21:57.676 "process": { 00:21:57.676 "type": "rebuild", 00:21:57.676 "target": "spare", 00:21:57.676 "progress": { 00:21:57.676 "blocks": 88320, 00:21:57.676 "percent": 46 00:21:57.676 } 00:21:57.676 }, 00:21:57.676 "base_bdevs_list": [ 00:21:57.676 { 00:21:57.676 "name": "spare", 00:21:57.676 "uuid": 
"3d0718d3-4e7b-5569-a095-694947fa495a", 00:21:57.676 "is_configured": true, 00:21:57.676 "data_offset": 2048, 00:21:57.676 "data_size": 63488 00:21:57.677 }, 00:21:57.677 { 00:21:57.677 "name": "BaseBdev2", 00:21:57.677 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:57.677 "is_configured": true, 00:21:57.677 "data_offset": 2048, 00:21:57.677 "data_size": 63488 00:21:57.677 }, 00:21:57.677 { 00:21:57.677 "name": "BaseBdev3", 00:21:57.677 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:57.677 "is_configured": true, 00:21:57.677 "data_offset": 2048, 00:21:57.677 "data_size": 63488 00:21:57.677 }, 00:21:57.677 { 00:21:57.677 "name": "BaseBdev4", 00:21:57.677 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:57.677 "is_configured": true, 00:21:57.677 "data_offset": 2048, 00:21:57.677 "data_size": 63488 00:21:57.677 } 00:21:57.677 ] 00:21:57.677 }' 00:21:57.677 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.935 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:57.935 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.935 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.935 10:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.872 "name": "raid_bdev1", 00:21:58.872 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:21:58.872 "strip_size_kb": 64, 00:21:58.872 "state": "online", 00:21:58.872 "raid_level": "raid5f", 00:21:58.872 "superblock": true, 00:21:58.872 "num_base_bdevs": 4, 00:21:58.872 "num_base_bdevs_discovered": 4, 00:21:58.872 "num_base_bdevs_operational": 4, 00:21:58.872 "process": { 00:21:58.872 "type": "rebuild", 00:21:58.872 "target": "spare", 00:21:58.872 "progress": { 00:21:58.872 "blocks": 109440, 00:21:58.872 "percent": 57 00:21:58.872 } 00:21:58.872 }, 00:21:58.872 "base_bdevs_list": [ 00:21:58.872 { 00:21:58.872 "name": "spare", 00:21:58.872 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:21:58.872 "is_configured": true, 00:21:58.872 "data_offset": 2048, 00:21:58.872 "data_size": 63488 00:21:58.872 }, 00:21:58.872 { 00:21:58.872 "name": "BaseBdev2", 00:21:58.872 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:21:58.872 "is_configured": true, 00:21:58.872 "data_offset": 2048, 00:21:58.872 "data_size": 63488 00:21:58.872 }, 00:21:58.872 { 00:21:58.872 "name": "BaseBdev3", 00:21:58.872 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:21:58.872 "is_configured": true, 00:21:58.872 
"data_offset": 2048, 00:21:58.872 "data_size": 63488 00:21:58.872 }, 00:21:58.872 { 00:21:58.872 "name": "BaseBdev4", 00:21:58.872 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:21:58.872 "is_configured": true, 00:21:58.872 "data_offset": 2048, 00:21:58.872 "data_size": 63488 00:21:58.872 } 00:21:58.872 ] 00:21:58.872 }' 00:21:58.872 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.131 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.131 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.131 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.131 10:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:00.068 "name": "raid_bdev1", 00:22:00.068 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:00.068 "strip_size_kb": 64, 00:22:00.068 "state": "online", 00:22:00.068 "raid_level": "raid5f", 00:22:00.068 "superblock": true, 00:22:00.068 "num_base_bdevs": 4, 00:22:00.068 "num_base_bdevs_discovered": 4, 00:22:00.068 "num_base_bdevs_operational": 4, 00:22:00.068 "process": { 00:22:00.068 "type": "rebuild", 00:22:00.068 "target": "spare", 00:22:00.068 "progress": { 00:22:00.068 "blocks": 132480, 00:22:00.068 "percent": 69 00:22:00.068 } 00:22:00.068 }, 00:22:00.068 "base_bdevs_list": [ 00:22:00.068 { 00:22:00.068 "name": "spare", 00:22:00.068 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:00.068 "is_configured": true, 00:22:00.068 "data_offset": 2048, 00:22:00.068 "data_size": 63488 00:22:00.068 }, 00:22:00.068 { 00:22:00.068 "name": "BaseBdev2", 00:22:00.068 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:00.068 "is_configured": true, 00:22:00.068 "data_offset": 2048, 00:22:00.068 "data_size": 63488 00:22:00.068 }, 00:22:00.068 { 00:22:00.068 "name": "BaseBdev3", 00:22:00.068 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:00.068 "is_configured": true, 00:22:00.068 "data_offset": 2048, 00:22:00.068 "data_size": 63488 00:22:00.068 }, 00:22:00.068 { 00:22:00.068 "name": "BaseBdev4", 00:22:00.068 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:00.068 "is_configured": true, 00:22:00.068 "data_offset": 2048, 00:22:00.068 "data_size": 63488 00:22:00.068 } 00:22:00.068 ] 00:22:00.068 }' 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:22:00.068 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:00.327 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.327 10:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.350 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.351 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.351 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.351 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:01.351 "name": "raid_bdev1", 00:22:01.351 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:01.351 "strip_size_kb": 64, 00:22:01.351 "state": "online", 00:22:01.351 "raid_level": "raid5f", 00:22:01.351 "superblock": true, 00:22:01.351 "num_base_bdevs": 4, 00:22:01.351 "num_base_bdevs_discovered": 4, 
00:22:01.351 "num_base_bdevs_operational": 4, 00:22:01.351 "process": { 00:22:01.351 "type": "rebuild", 00:22:01.351 "target": "spare", 00:22:01.351 "progress": { 00:22:01.351 "blocks": 153600, 00:22:01.351 "percent": 80 00:22:01.351 } 00:22:01.351 }, 00:22:01.351 "base_bdevs_list": [ 00:22:01.351 { 00:22:01.351 "name": "spare", 00:22:01.351 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:01.351 "is_configured": true, 00:22:01.351 "data_offset": 2048, 00:22:01.351 "data_size": 63488 00:22:01.351 }, 00:22:01.351 { 00:22:01.351 "name": "BaseBdev2", 00:22:01.351 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:01.351 "is_configured": true, 00:22:01.351 "data_offset": 2048, 00:22:01.351 "data_size": 63488 00:22:01.351 }, 00:22:01.351 { 00:22:01.351 "name": "BaseBdev3", 00:22:01.351 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:01.351 "is_configured": true, 00:22:01.351 "data_offset": 2048, 00:22:01.351 "data_size": 63488 00:22:01.351 }, 00:22:01.351 { 00:22:01.351 "name": "BaseBdev4", 00:22:01.351 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:01.351 "is_configured": true, 00:22:01.351 "data_offset": 2048, 00:22:01.351 "data_size": 63488 00:22:01.351 } 00:22:01.351 ] 00:22:01.351 }' 00:22:01.351 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:01.351 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:01.351 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:01.351 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.351 10:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:02.288 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.547 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.547 "name": "raid_bdev1", 00:22:02.547 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:02.547 "strip_size_kb": 64, 00:22:02.547 "state": "online", 00:22:02.547 "raid_level": "raid5f", 00:22:02.547 "superblock": true, 00:22:02.547 "num_base_bdevs": 4, 00:22:02.547 "num_base_bdevs_discovered": 4, 00:22:02.547 "num_base_bdevs_operational": 4, 00:22:02.547 "process": { 00:22:02.547 "type": "rebuild", 00:22:02.547 "target": "spare", 00:22:02.547 "progress": { 00:22:02.547 "blocks": 176640, 00:22:02.547 "percent": 92 00:22:02.547 } 00:22:02.547 }, 00:22:02.547 "base_bdevs_list": [ 00:22:02.547 { 00:22:02.547 "name": "spare", 00:22:02.547 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:02.547 "is_configured": true, 00:22:02.547 "data_offset": 2048, 00:22:02.547 "data_size": 63488 00:22:02.547 }, 00:22:02.547 { 00:22:02.547 "name": "BaseBdev2", 
00:22:02.547 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:02.547 "is_configured": true, 00:22:02.547 "data_offset": 2048, 00:22:02.547 "data_size": 63488 00:22:02.547 }, 00:22:02.547 { 00:22:02.547 "name": "BaseBdev3", 00:22:02.547 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:02.547 "is_configured": true, 00:22:02.547 "data_offset": 2048, 00:22:02.547 "data_size": 63488 00:22:02.547 }, 00:22:02.547 { 00:22:02.547 "name": "BaseBdev4", 00:22:02.547 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:02.547 "is_configured": true, 00:22:02.547 "data_offset": 2048, 00:22:02.547 "data_size": 63488 00:22:02.547 } 00:22:02.547 ] 00:22:02.547 }' 00:22:02.547 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.547 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.547 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.547 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.547 10:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:03.116 [2024-10-30 10:50:24.524017] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:03.116 [2024-10-30 10:50:24.524323] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:03.116 [2024-10-30 10:50:24.524509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.684 10:50:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.684 "name": "raid_bdev1", 00:22:03.684 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:03.684 "strip_size_kb": 64, 00:22:03.684 "state": "online", 00:22:03.684 "raid_level": "raid5f", 00:22:03.684 "superblock": true, 00:22:03.684 "num_base_bdevs": 4, 00:22:03.684 "num_base_bdevs_discovered": 4, 00:22:03.684 "num_base_bdevs_operational": 4, 00:22:03.684 "base_bdevs_list": [ 00:22:03.684 { 00:22:03.684 "name": "spare", 00:22:03.684 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:03.684 "is_configured": true, 00:22:03.684 "data_offset": 2048, 00:22:03.684 "data_size": 63488 00:22:03.684 }, 00:22:03.684 { 00:22:03.684 "name": "BaseBdev2", 00:22:03.684 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:03.684 "is_configured": true, 00:22:03.684 "data_offset": 2048, 00:22:03.684 "data_size": 63488 00:22:03.684 }, 00:22:03.684 { 00:22:03.684 "name": "BaseBdev3", 00:22:03.684 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:03.684 "is_configured": true, 00:22:03.684 "data_offset": 2048, 00:22:03.684 
"data_size": 63488 00:22:03.684 }, 00:22:03.684 { 00:22:03.684 "name": "BaseBdev4", 00:22:03.684 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:03.684 "is_configured": true, 00:22:03.684 "data_offset": 2048, 00:22:03.684 "data_size": 63488 00:22:03.684 } 00:22:03.684 ] 00:22:03.684 }' 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:03.684 10:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.684 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.684 10:50:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.684 "name": "raid_bdev1", 00:22:03.685 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:03.685 "strip_size_kb": 64, 00:22:03.685 "state": "online", 00:22:03.685 "raid_level": "raid5f", 00:22:03.685 "superblock": true, 00:22:03.685 "num_base_bdevs": 4, 00:22:03.685 "num_base_bdevs_discovered": 4, 00:22:03.685 "num_base_bdevs_operational": 4, 00:22:03.685 "base_bdevs_list": [ 00:22:03.685 { 00:22:03.685 "name": "spare", 00:22:03.685 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:03.685 "is_configured": true, 00:22:03.685 "data_offset": 2048, 00:22:03.685 "data_size": 63488 00:22:03.685 }, 00:22:03.685 { 00:22:03.685 "name": "BaseBdev2", 00:22:03.685 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:03.685 "is_configured": true, 00:22:03.685 "data_offset": 2048, 00:22:03.685 "data_size": 63488 00:22:03.685 }, 00:22:03.685 { 00:22:03.685 "name": "BaseBdev3", 00:22:03.685 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:03.685 "is_configured": true, 00:22:03.685 "data_offset": 2048, 00:22:03.685 "data_size": 63488 00:22:03.685 }, 00:22:03.685 { 00:22:03.685 "name": "BaseBdev4", 00:22:03.685 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:03.685 "is_configured": true, 00:22:03.685 "data_offset": 2048, 00:22:03.685 "data_size": 63488 00:22:03.685 } 00:22:03.685 ] 00:22:03.685 }' 00:22:03.685 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.943 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.944 "name": "raid_bdev1", 00:22:03.944 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:03.944 "strip_size_kb": 64, 00:22:03.944 "state": "online", 00:22:03.944 "raid_level": "raid5f", 00:22:03.944 "superblock": true, 00:22:03.944 "num_base_bdevs": 4, 00:22:03.944 "num_base_bdevs_discovered": 4, 00:22:03.944 
"num_base_bdevs_operational": 4, 00:22:03.944 "base_bdevs_list": [ 00:22:03.944 { 00:22:03.944 "name": "spare", 00:22:03.944 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:03.944 "is_configured": true, 00:22:03.944 "data_offset": 2048, 00:22:03.944 "data_size": 63488 00:22:03.944 }, 00:22:03.944 { 00:22:03.944 "name": "BaseBdev2", 00:22:03.944 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:03.944 "is_configured": true, 00:22:03.944 "data_offset": 2048, 00:22:03.944 "data_size": 63488 00:22:03.944 }, 00:22:03.944 { 00:22:03.944 "name": "BaseBdev3", 00:22:03.944 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:03.944 "is_configured": true, 00:22:03.944 "data_offset": 2048, 00:22:03.944 "data_size": 63488 00:22:03.944 }, 00:22:03.944 { 00:22:03.944 "name": "BaseBdev4", 00:22:03.944 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:03.944 "is_configured": true, 00:22:03.944 "data_offset": 2048, 00:22:03.944 "data_size": 63488 00:22:03.944 } 00:22:03.944 ] 00:22:03.944 }' 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.944 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.512 [2024-10-30 10:50:25.759698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:04.512 [2024-10-30 10:50:25.759734] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:04.512 [2024-10-30 10:50:25.759849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.512 [2024-10-30 10:50:25.759995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:22:04.512 [2024-10-30 10:50:25.760054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:04.512 10:50:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.512 10:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:04.771 /dev/nbd0 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.771 1+0 records in 00:22:04.771 1+0 records out 00:22:04.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022027 s, 18.6 MB/s 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # size=4096 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:04.771 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:05.030 /dev/nbd1 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:05.030 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:05.031 1+0 records in 00:22:05.031 1+0 records out 00:22:05.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362525 s, 11.3 MB/s 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:05.031 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:05.289 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:05.289 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:05.289 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:05.289 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:05.289 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:05.289 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.289 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:22:05.547 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:05.548 10:50:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.807 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.066 [2024-10-30 10:50:27.277805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:06.066 [2024-10-30 10:50:27.277934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.066 [2024-10-30 10:50:27.277984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:06.066 [2024-10-30 10:50:27.278002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.066 [2024-10-30 10:50:27.281074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.066 [2024-10-30 10:50:27.281118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:06.066 [2024-10-30 10:50:27.281199] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:06.066 [2024-10-30 10:50:27.281264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.066 [2024-10-30 10:50:27.281507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:06.066 [2024-10-30 10:50:27.281643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:22:06.066 [2024-10-30 10:50:27.281761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:06.066 spare 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.066 [2024-10-30 10:50:27.381924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:06.066 [2024-10-30 10:50:27.381960] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:06.066 [2024-10-30 10:50:27.382290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:22:06.066 [2024-10-30 10:50:27.389220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:06.066 [2024-10-30 10:50:27.389266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:06.066 [2024-10-30 10:50:27.389499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:06.066 10:50:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.066 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.067 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.067 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.067 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.067 "name": "raid_bdev1", 00:22:06.067 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:06.067 "strip_size_kb": 64, 00:22:06.067 "state": "online", 00:22:06.067 "raid_level": "raid5f", 00:22:06.067 "superblock": true, 00:22:06.067 "num_base_bdevs": 4, 00:22:06.067 "num_base_bdevs_discovered": 4, 00:22:06.067 "num_base_bdevs_operational": 4, 00:22:06.067 "base_bdevs_list": [ 00:22:06.067 { 00:22:06.067 "name": "spare", 00:22:06.067 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:06.067 "is_configured": true, 00:22:06.067 "data_offset": 2048, 00:22:06.067 "data_size": 63488 00:22:06.067 }, 00:22:06.067 { 00:22:06.067 "name": "BaseBdev2", 00:22:06.067 "uuid": 
"a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:06.067 "is_configured": true, 00:22:06.067 "data_offset": 2048, 00:22:06.067 "data_size": 63488 00:22:06.067 }, 00:22:06.067 { 00:22:06.067 "name": "BaseBdev3", 00:22:06.067 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:06.067 "is_configured": true, 00:22:06.067 "data_offset": 2048, 00:22:06.067 "data_size": 63488 00:22:06.067 }, 00:22:06.067 { 00:22:06.067 "name": "BaseBdev4", 00:22:06.067 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:06.067 "is_configured": true, 00:22:06.067 "data_offset": 2048, 00:22:06.067 "data_size": 63488 00:22:06.067 } 00:22:06.067 ] 00:22:06.067 }' 00:22:06.067 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.067 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.634 10:50:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.634 "name": "raid_bdev1", 00:22:06.634 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:06.634 "strip_size_kb": 64, 00:22:06.634 "state": "online", 00:22:06.634 "raid_level": "raid5f", 00:22:06.634 "superblock": true, 00:22:06.634 "num_base_bdevs": 4, 00:22:06.634 "num_base_bdevs_discovered": 4, 00:22:06.634 "num_base_bdevs_operational": 4, 00:22:06.634 "base_bdevs_list": [ 00:22:06.634 { 00:22:06.634 "name": "spare", 00:22:06.634 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:06.634 "is_configured": true, 00:22:06.634 "data_offset": 2048, 00:22:06.634 "data_size": 63488 00:22:06.634 }, 00:22:06.634 { 00:22:06.634 "name": "BaseBdev2", 00:22:06.634 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:06.634 "is_configured": true, 00:22:06.634 "data_offset": 2048, 00:22:06.634 "data_size": 63488 00:22:06.634 }, 00:22:06.634 { 00:22:06.634 "name": "BaseBdev3", 00:22:06.634 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:06.634 "is_configured": true, 00:22:06.634 "data_offset": 2048, 00:22:06.634 "data_size": 63488 00:22:06.634 }, 00:22:06.634 { 00:22:06.634 "name": "BaseBdev4", 00:22:06.634 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:06.634 "is_configured": true, 00:22:06.634 "data_offset": 2048, 00:22:06.634 "data_size": 63488 00:22:06.634 } 00:22:06.634 ] 00:22:06.634 }' 00:22:06.634 10:50:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.634 
10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.634 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.892 [2024-10-30 10:50:28.106088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.892 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.892 "name": "raid_bdev1", 00:22:06.892 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:06.892 "strip_size_kb": 64, 00:22:06.892 "state": "online", 00:22:06.892 "raid_level": "raid5f", 00:22:06.892 "superblock": true, 00:22:06.892 "num_base_bdevs": 4, 00:22:06.892 "num_base_bdevs_discovered": 3, 00:22:06.892 "num_base_bdevs_operational": 3, 00:22:06.892 "base_bdevs_list": [ 00:22:06.892 { 00:22:06.892 "name": null, 00:22:06.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.892 "is_configured": false, 00:22:06.892 "data_offset": 0, 00:22:06.892 "data_size": 63488 00:22:06.892 }, 00:22:06.892 { 00:22:06.892 "name": "BaseBdev2", 00:22:06.892 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:06.892 "is_configured": true, 00:22:06.892 "data_offset": 2048, 00:22:06.892 "data_size": 63488 00:22:06.892 }, 00:22:06.892 { 00:22:06.892 "name": "BaseBdev3", 00:22:06.892 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:06.892 "is_configured": true, 00:22:06.892 "data_offset": 2048, 00:22:06.892 "data_size": 63488 00:22:06.892 }, 00:22:06.892 { 00:22:06.892 "name": "BaseBdev4", 
00:22:06.892 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:06.892 "is_configured": true, 00:22:06.892 "data_offset": 2048, 00:22:06.892 "data_size": 63488 00:22:06.892 } 00:22:06.892 ] 00:22:06.892 }' 00:22:06.893 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.893 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.457 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:07.457 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.457 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.457 [2024-10-30 10:50:28.634275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:07.457 [2024-10-30 10:50:28.634590] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:07.457 [2024-10-30 10:50:28.634617] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:07.457 [2024-10-30 10:50:28.634664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:07.457 [2024-10-30 10:50:28.648244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:22:07.457 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.457 10:50:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:07.457 [2024-10-30 10:50:28.657216] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:08.448 "name": "raid_bdev1", 00:22:08.448 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:08.448 "strip_size_kb": 64, 00:22:08.448 "state": "online", 00:22:08.448 
"raid_level": "raid5f", 00:22:08.448 "superblock": true, 00:22:08.448 "num_base_bdevs": 4, 00:22:08.448 "num_base_bdevs_discovered": 4, 00:22:08.448 "num_base_bdevs_operational": 4, 00:22:08.448 "process": { 00:22:08.448 "type": "rebuild", 00:22:08.448 "target": "spare", 00:22:08.448 "progress": { 00:22:08.448 "blocks": 17280, 00:22:08.448 "percent": 9 00:22:08.448 } 00:22:08.448 }, 00:22:08.448 "base_bdevs_list": [ 00:22:08.448 { 00:22:08.448 "name": "spare", 00:22:08.448 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:08.448 "is_configured": true, 00:22:08.448 "data_offset": 2048, 00:22:08.448 "data_size": 63488 00:22:08.448 }, 00:22:08.448 { 00:22:08.448 "name": "BaseBdev2", 00:22:08.448 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:08.448 "is_configured": true, 00:22:08.448 "data_offset": 2048, 00:22:08.448 "data_size": 63488 00:22:08.448 }, 00:22:08.448 { 00:22:08.448 "name": "BaseBdev3", 00:22:08.448 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:08.448 "is_configured": true, 00:22:08.448 "data_offset": 2048, 00:22:08.448 "data_size": 63488 00:22:08.448 }, 00:22:08.448 { 00:22:08.448 "name": "BaseBdev4", 00:22:08.448 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:08.448 "is_configured": true, 00:22:08.448 "data_offset": 2048, 00:22:08.448 "data_size": 63488 00:22:08.448 } 00:22:08.448 ] 00:22:08.448 }' 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.448 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.448 [2024-10-30 10:50:29.838588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.448 [2024-10-30 10:50:29.868939] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:08.448 [2024-10-30 10:50:29.869037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.448 [2024-10-30 10:50:29.869064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:08.448 [2024-10-30 10:50:29.869082] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:08.728 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.729 "name": "raid_bdev1", 00:22:08.729 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:08.729 "strip_size_kb": 64, 00:22:08.729 "state": "online", 00:22:08.729 "raid_level": "raid5f", 00:22:08.729 "superblock": true, 00:22:08.729 "num_base_bdevs": 4, 00:22:08.729 "num_base_bdevs_discovered": 3, 00:22:08.729 "num_base_bdevs_operational": 3, 00:22:08.729 "base_bdevs_list": [ 00:22:08.729 { 00:22:08.729 "name": null, 00:22:08.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.729 "is_configured": false, 00:22:08.729 "data_offset": 0, 00:22:08.729 "data_size": 63488 00:22:08.729 }, 00:22:08.729 { 00:22:08.729 "name": "BaseBdev2", 00:22:08.729 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:08.729 "is_configured": true, 00:22:08.729 "data_offset": 2048, 00:22:08.729 "data_size": 63488 00:22:08.729 }, 00:22:08.729 { 00:22:08.729 "name": "BaseBdev3", 00:22:08.729 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:08.729 "is_configured": true, 00:22:08.729 "data_offset": 2048, 00:22:08.729 "data_size": 63488 00:22:08.729 }, 00:22:08.729 { 00:22:08.729 "name": "BaseBdev4", 00:22:08.729 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:08.729 "is_configured": true, 00:22:08.729 "data_offset": 2048, 00:22:08.729 "data_size": 63488 00:22:08.729 } 00:22:08.729 ] 00:22:08.729 }' 
00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.729 10:50:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.294 10:50:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:09.294 10:50:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.294 10:50:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.294 [2024-10-30 10:50:30.461977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:09.294 [2024-10-30 10:50:30.462097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.294 [2024-10-30 10:50:30.462136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:09.294 [2024-10-30 10:50:30.462166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.295 [2024-10-30 10:50:30.462771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.295 [2024-10-30 10:50:30.462813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:09.295 [2024-10-30 10:50:30.462936] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:09.295 [2024-10-30 10:50:30.462961] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:09.295 [2024-10-30 10:50:30.463004] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:09.295 [2024-10-30 10:50:30.463038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.295 [2024-10-30 10:50:30.477621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:22:09.295 spare 00:22:09.295 10:50:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.295 10:50:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:09.295 [2024-10-30 10:50:30.486697] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:10.230 "name": "raid_bdev1", 00:22:10.230 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:10.230 "strip_size_kb": 64, 00:22:10.230 "state": 
"online", 00:22:10.230 "raid_level": "raid5f", 00:22:10.230 "superblock": true, 00:22:10.230 "num_base_bdevs": 4, 00:22:10.230 "num_base_bdevs_discovered": 4, 00:22:10.230 "num_base_bdevs_operational": 4, 00:22:10.230 "process": { 00:22:10.230 "type": "rebuild", 00:22:10.230 "target": "spare", 00:22:10.230 "progress": { 00:22:10.230 "blocks": 17280, 00:22:10.230 "percent": 9 00:22:10.230 } 00:22:10.230 }, 00:22:10.230 "base_bdevs_list": [ 00:22:10.230 { 00:22:10.230 "name": "spare", 00:22:10.230 "uuid": "3d0718d3-4e7b-5569-a095-694947fa495a", 00:22:10.230 "is_configured": true, 00:22:10.230 "data_offset": 2048, 00:22:10.230 "data_size": 63488 00:22:10.230 }, 00:22:10.230 { 00:22:10.230 "name": "BaseBdev2", 00:22:10.230 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:10.230 "is_configured": true, 00:22:10.230 "data_offset": 2048, 00:22:10.230 "data_size": 63488 00:22:10.230 }, 00:22:10.230 { 00:22:10.230 "name": "BaseBdev3", 00:22:10.230 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:10.230 "is_configured": true, 00:22:10.230 "data_offset": 2048, 00:22:10.230 "data_size": 63488 00:22:10.230 }, 00:22:10.230 { 00:22:10.230 "name": "BaseBdev4", 00:22:10.230 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:10.230 "is_configured": true, 00:22:10.230 "data_offset": 2048, 00:22:10.230 "data_size": 63488 00:22:10.230 } 00:22:10.230 ] 00:22:10.230 }' 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:10.230 10:50:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.230 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.230 [2024-10-30 10:50:31.668069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.230 [2024-10-30 10:50:31.697871] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:10.230 [2024-10-30 10:50:31.697954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.230 [2024-10-30 10:50:31.697999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:10.230 [2024-10-30 10:50:31.698013] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.489 10:50:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.489 "name": "raid_bdev1", 00:22:10.489 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:10.489 "strip_size_kb": 64, 00:22:10.489 "state": "online", 00:22:10.489 "raid_level": "raid5f", 00:22:10.489 "superblock": true, 00:22:10.489 "num_base_bdevs": 4, 00:22:10.489 "num_base_bdevs_discovered": 3, 00:22:10.489 "num_base_bdevs_operational": 3, 00:22:10.489 "base_bdevs_list": [ 00:22:10.489 { 00:22:10.489 "name": null, 00:22:10.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.489 "is_configured": false, 00:22:10.489 "data_offset": 0, 00:22:10.489 "data_size": 63488 00:22:10.489 }, 00:22:10.489 { 00:22:10.489 "name": "BaseBdev2", 00:22:10.489 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:10.489 "is_configured": true, 00:22:10.489 "data_offset": 2048, 00:22:10.489 "data_size": 63488 00:22:10.489 }, 00:22:10.489 { 00:22:10.489 "name": "BaseBdev3", 00:22:10.489 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:10.489 "is_configured": true, 00:22:10.489 "data_offset": 2048, 00:22:10.489 "data_size": 63488 00:22:10.489 }, 00:22:10.489 { 00:22:10.489 "name": "BaseBdev4", 00:22:10.489 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:10.489 "is_configured": true, 00:22:10.489 "data_offset": 2048, 00:22:10.489 
"data_size": 63488 00:22:10.489 } 00:22:10.489 ] 00:22:10.489 }' 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.489 10:50:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:11.056 "name": "raid_bdev1", 00:22:11.056 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:11.056 "strip_size_kb": 64, 00:22:11.056 "state": "online", 00:22:11.056 "raid_level": "raid5f", 00:22:11.056 "superblock": true, 00:22:11.056 "num_base_bdevs": 4, 00:22:11.056 "num_base_bdevs_discovered": 3, 00:22:11.056 "num_base_bdevs_operational": 3, 00:22:11.056 "base_bdevs_list": [ 00:22:11.056 { 00:22:11.056 "name": null, 00:22:11.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.056 
"is_configured": false, 00:22:11.056 "data_offset": 0, 00:22:11.056 "data_size": 63488 00:22:11.056 }, 00:22:11.056 { 00:22:11.056 "name": "BaseBdev2", 00:22:11.056 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:11.056 "is_configured": true, 00:22:11.056 "data_offset": 2048, 00:22:11.056 "data_size": 63488 00:22:11.056 }, 00:22:11.056 { 00:22:11.056 "name": "BaseBdev3", 00:22:11.056 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:11.056 "is_configured": true, 00:22:11.056 "data_offset": 2048, 00:22:11.056 "data_size": 63488 00:22:11.056 }, 00:22:11.056 { 00:22:11.056 "name": "BaseBdev4", 00:22:11.056 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:11.056 "is_configured": true, 00:22:11.056 "data_offset": 2048, 00:22:11.056 "data_size": 63488 00:22:11.056 } 00:22:11.056 ] 00:22:11.056 }' 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.056 10:50:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.056 [2024-10-30 10:50:32.408747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:11.056 [2024-10-30 10:50:32.409815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.056 [2024-10-30 10:50:32.409860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:11.056 [2024-10-30 10:50:32.409877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.056 [2024-10-30 10:50:32.410456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.056 [2024-10-30 10:50:32.410488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:11.056 [2024-10-30 10:50:32.410591] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:11.056 [2024-10-30 10:50:32.410612] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:11.056 [2024-10-30 10:50:32.410628] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:11.056 [2024-10-30 10:50:32.410641] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:11.056 BaseBdev1 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.056 10:50:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.990 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.248 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.248 "name": "raid_bdev1", 00:22:12.248 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:12.248 "strip_size_kb": 64, 00:22:12.248 "state": "online", 00:22:12.248 "raid_level": "raid5f", 00:22:12.248 "superblock": true, 00:22:12.248 "num_base_bdevs": 4, 00:22:12.248 "num_base_bdevs_discovered": 3, 00:22:12.248 "num_base_bdevs_operational": 3, 00:22:12.248 "base_bdevs_list": [ 00:22:12.248 { 00:22:12.248 "name": null, 00:22:12.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.248 "is_configured": false, 00:22:12.248 
"data_offset": 0, 00:22:12.248 "data_size": 63488 00:22:12.248 }, 00:22:12.248 { 00:22:12.249 "name": "BaseBdev2", 00:22:12.249 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:12.249 "is_configured": true, 00:22:12.249 "data_offset": 2048, 00:22:12.249 "data_size": 63488 00:22:12.249 }, 00:22:12.249 { 00:22:12.249 "name": "BaseBdev3", 00:22:12.249 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:12.249 "is_configured": true, 00:22:12.249 "data_offset": 2048, 00:22:12.249 "data_size": 63488 00:22:12.249 }, 00:22:12.249 { 00:22:12.249 "name": "BaseBdev4", 00:22:12.249 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:12.249 "is_configured": true, 00:22:12.249 "data_offset": 2048, 00:22:12.249 "data_size": 63488 00:22:12.249 } 00:22:12.249 ] 00:22:12.249 }' 00:22:12.249 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.249 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:12.507 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.765 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:12.765 "name": "raid_bdev1", 00:22:12.765 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:12.765 "strip_size_kb": 64, 00:22:12.765 "state": "online", 00:22:12.765 "raid_level": "raid5f", 00:22:12.765 "superblock": true, 00:22:12.765 "num_base_bdevs": 4, 00:22:12.765 "num_base_bdevs_discovered": 3, 00:22:12.765 "num_base_bdevs_operational": 3, 00:22:12.765 "base_bdevs_list": [ 00:22:12.765 { 00:22:12.765 "name": null, 00:22:12.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.765 "is_configured": false, 00:22:12.765 "data_offset": 0, 00:22:12.765 "data_size": 63488 00:22:12.765 }, 00:22:12.765 { 00:22:12.765 "name": "BaseBdev2", 00:22:12.765 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:12.765 "is_configured": true, 00:22:12.765 "data_offset": 2048, 00:22:12.765 "data_size": 63488 00:22:12.765 }, 00:22:12.765 { 00:22:12.765 "name": "BaseBdev3", 00:22:12.765 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:12.765 "is_configured": true, 00:22:12.765 "data_offset": 2048, 00:22:12.765 "data_size": 63488 00:22:12.765 }, 00:22:12.765 { 00:22:12.765 "name": "BaseBdev4", 00:22:12.765 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:12.765 "is_configured": true, 00:22:12.765 "data_offset": 2048, 00:22:12.765 "data_size": 63488 00:22:12.765 } 00:22:12.765 ] 00:22:12.765 }' 00:22:12.765 10:50:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:12.765 
10:50:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.765 [2024-10-30 10:50:34.105376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:12.765 [2024-10-30 10:50:34.105583] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:12.765 [2024-10-30 10:50:34.105606] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:12.765 request: 00:22:12.765 { 00:22:12.765 "base_bdev": "BaseBdev1", 00:22:12.765 "raid_bdev": "raid_bdev1", 00:22:12.765 "method": "bdev_raid_add_base_bdev", 00:22:12.765 "req_id": 1 00:22:12.765 } 00:22:12.765 Got JSON-RPC error response 00:22:12.765 response: 00:22:12.765 { 00:22:12.765 "code": -22, 00:22:12.765 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:22:12.765 } 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:12.765 10:50:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.700 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.958 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.958 "name": "raid_bdev1", 00:22:13.958 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:13.958 "strip_size_kb": 64, 00:22:13.958 "state": "online", 00:22:13.958 "raid_level": "raid5f", 00:22:13.958 "superblock": true, 00:22:13.958 "num_base_bdevs": 4, 00:22:13.958 "num_base_bdevs_discovered": 3, 00:22:13.958 "num_base_bdevs_operational": 3, 00:22:13.958 "base_bdevs_list": [ 00:22:13.958 { 00:22:13.958 "name": null, 00:22:13.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.958 "is_configured": false, 00:22:13.958 "data_offset": 0, 00:22:13.958 "data_size": 63488 00:22:13.958 }, 00:22:13.958 { 00:22:13.958 "name": "BaseBdev2", 00:22:13.958 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:13.958 "is_configured": true, 00:22:13.958 "data_offset": 2048, 00:22:13.958 "data_size": 63488 00:22:13.958 }, 00:22:13.958 { 00:22:13.958 "name": "BaseBdev3", 00:22:13.958 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:13.958 "is_configured": true, 00:22:13.958 "data_offset": 2048, 00:22:13.958 "data_size": 63488 00:22:13.958 }, 00:22:13.958 { 00:22:13.958 "name": "BaseBdev4", 00:22:13.958 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:13.958 "is_configured": true, 00:22:13.958 "data_offset": 2048, 00:22:13.958 "data_size": 63488 00:22:13.958 } 00:22:13.958 ] 00:22:13.958 }' 00:22:13.959 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.959 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.216 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:14.216 "name": "raid_bdev1", 00:22:14.216 "uuid": "ea50f969-3835-4788-8175-ea5d8f0ebf14", 00:22:14.216 "strip_size_kb": 64, 00:22:14.216 "state": "online", 00:22:14.216 "raid_level": "raid5f", 00:22:14.216 "superblock": true, 00:22:14.216 "num_base_bdevs": 4, 00:22:14.216 "num_base_bdevs_discovered": 3, 00:22:14.216 "num_base_bdevs_operational": 3, 00:22:14.216 "base_bdevs_list": [ 00:22:14.216 { 00:22:14.216 "name": null, 00:22:14.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.216 "is_configured": false, 00:22:14.216 "data_offset": 0, 00:22:14.216 "data_size": 63488 00:22:14.216 }, 00:22:14.216 { 00:22:14.216 "name": "BaseBdev2", 00:22:14.216 "uuid": "a44a89f9-6aaf-5b77-866d-3c03d2a99f08", 00:22:14.217 "is_configured": true, 
00:22:14.217 "data_offset": 2048, 00:22:14.217 "data_size": 63488 00:22:14.217 }, 00:22:14.217 { 00:22:14.217 "name": "BaseBdev3", 00:22:14.217 "uuid": "51008c91-7326-5b71-9c21-aa0f28d7f764", 00:22:14.217 "is_configured": true, 00:22:14.217 "data_offset": 2048, 00:22:14.217 "data_size": 63488 00:22:14.217 }, 00:22:14.217 { 00:22:14.217 "name": "BaseBdev4", 00:22:14.217 "uuid": "6913a8d6-afa1-55a1-9035-b2f7247b4641", 00:22:14.217 "is_configured": true, 00:22:14.217 "data_offset": 2048, 00:22:14.217 "data_size": 63488 00:22:14.217 } 00:22:14.217 ] 00:22:14.217 }' 00:22:14.217 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85678 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85678 ']' 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85678 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85678 00:22:14.475 killing process with pid 85678 00:22:14.475 Received shutdown signal, test time was about 60.000000 seconds 00:22:14.475 00:22:14.475 Latency(us) 00:22:14.475 [2024-10-30T10:50:35.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.475 [2024-10-30T10:50:35.945Z] 
=================================================================================================================== 00:22:14.475 [2024-10-30T10:50:35.945Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85678' 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85678 00:22:14.475 [2024-10-30 10:50:35.825254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:14.475 10:50:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85678 00:22:14.475 [2024-10-30 10:50:35.825401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.475 [2024-10-30 10:50:35.825499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.475 [2024-10-30 10:50:35.825535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:15.041 [2024-10-30 10:50:36.284877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:16.049 10:50:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:22:16.049 00:22:16.049 real 0m28.741s 00:22:16.049 user 0m37.405s 00:22:16.049 sys 0m2.946s 00:22:16.049 10:50:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:16.049 ************************************ 00:22:16.049 END TEST raid5f_rebuild_test_sb 00:22:16.049 ************************************ 00:22:16.049 10:50:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.049 10:50:37 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:22:16.050 10:50:37 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:22:16.050 10:50:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:16.050 10:50:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:16.050 10:50:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:16.050 ************************************ 00:22:16.050 START TEST raid_state_function_test_sb_4k 00:22:16.050 ************************************ 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:16.050 10:50:37 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:16.050 Process raid pid: 86500 00:22:16.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86500 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86500' 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86500 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@833 -- # '[' -z 86500 ']' 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:16.050 10:50:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:16.311 [2024-10-30 10:50:37.534551] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:22:16.311 [2024-10-30 10:50:37.534903] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.311 [2024-10-30 10:50:37.725734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.569 [2024-10-30 10:50:37.875366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.826 [2024-10-30 10:50:38.089150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.826 [2024-10-30 10:50:38.089397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:17.392 
10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.392 [2024-10-30 10:50:38.610364] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:17.392 [2024-10-30 10:50:38.610635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:17.392 [2024-10-30 10:50:38.610763] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:17.392 [2024-10-30 10:50:38.610903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.392 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.393 10:50:38 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.393 "name": "Existed_Raid", 00:22:17.393 "uuid": "add2e85e-07de-46cb-a5e9-0dad79602dce", 00:22:17.393 "strip_size_kb": 0, 00:22:17.393 "state": "configuring", 00:22:17.393 "raid_level": "raid1", 00:22:17.393 "superblock": true, 00:22:17.393 "num_base_bdevs": 2, 00:22:17.393 "num_base_bdevs_discovered": 0, 00:22:17.393 "num_base_bdevs_operational": 2, 00:22:17.393 "base_bdevs_list": [ 00:22:17.393 { 00:22:17.393 "name": "BaseBdev1", 00:22:17.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.393 "is_configured": false, 00:22:17.393 "data_offset": 0, 00:22:17.393 "data_size": 0 00:22:17.393 }, 00:22:17.393 { 00:22:17.393 "name": "BaseBdev2", 00:22:17.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.393 "is_configured": false, 00:22:17.393 "data_offset": 0, 00:22:17.393 "data_size": 0 00:22:17.393 } 00:22:17.393 ] 00:22:17.393 }' 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.393 10:50:38 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.959 [2024-10-30 10:50:39.130493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:17.959 [2024-10-30 10:50:39.130699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.959 [2024-10-30 10:50:39.138472] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:17.959 [2024-10-30 10:50:39.138687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:17.959 [2024-10-30 10:50:39.138825] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:17.959 [2024-10-30 10:50:39.138889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.959 [2024-10-30 10:50:39.185576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.959 BaseBdev1 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.959 [ 00:22:17.959 { 00:22:17.959 "name": "BaseBdev1", 00:22:17.959 "aliases": [ 00:22:17.959 "9fe1f421-aec0-47d0-a4a7-48274b594964" 00:22:17.959 
], 00:22:17.959 "product_name": "Malloc disk", 00:22:17.959 "block_size": 4096, 00:22:17.959 "num_blocks": 8192, 00:22:17.959 "uuid": "9fe1f421-aec0-47d0-a4a7-48274b594964", 00:22:17.959 "assigned_rate_limits": { 00:22:17.959 "rw_ios_per_sec": 0, 00:22:17.959 "rw_mbytes_per_sec": 0, 00:22:17.959 "r_mbytes_per_sec": 0, 00:22:17.959 "w_mbytes_per_sec": 0 00:22:17.959 }, 00:22:17.959 "claimed": true, 00:22:17.959 "claim_type": "exclusive_write", 00:22:17.959 "zoned": false, 00:22:17.959 "supported_io_types": { 00:22:17.959 "read": true, 00:22:17.959 "write": true, 00:22:17.959 "unmap": true, 00:22:17.959 "flush": true, 00:22:17.959 "reset": true, 00:22:17.959 "nvme_admin": false, 00:22:17.959 "nvme_io": false, 00:22:17.959 "nvme_io_md": false, 00:22:17.959 "write_zeroes": true, 00:22:17.959 "zcopy": true, 00:22:17.959 "get_zone_info": false, 00:22:17.959 "zone_management": false, 00:22:17.959 "zone_append": false, 00:22:17.959 "compare": false, 00:22:17.959 "compare_and_write": false, 00:22:17.959 "abort": true, 00:22:17.959 "seek_hole": false, 00:22:17.959 "seek_data": false, 00:22:17.959 "copy": true, 00:22:17.959 "nvme_iov_md": false 00:22:17.959 }, 00:22:17.959 "memory_domains": [ 00:22:17.959 { 00:22:17.959 "dma_device_id": "system", 00:22:17.959 "dma_device_type": 1 00:22:17.959 }, 00:22:17.959 { 00:22:17.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.959 "dma_device_type": 2 00:22:17.959 } 00:22:17.959 ], 00:22:17.959 "driver_specific": {} 00:22:17.959 } 00:22:17.959 ] 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.959 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.959 "name": "Existed_Raid", 00:22:17.959 "uuid": "c6a7b397-581b-4316-92f2-8bbfbbadce67", 00:22:17.959 "strip_size_kb": 0, 00:22:17.959 "state": "configuring", 00:22:17.959 "raid_level": "raid1", 00:22:17.959 "superblock": true, 00:22:17.959 "num_base_bdevs": 2, 00:22:17.959 "num_base_bdevs_discovered": 1, 
00:22:17.959 "num_base_bdevs_operational": 2, 00:22:17.959 "base_bdevs_list": [ 00:22:17.959 { 00:22:17.959 "name": "BaseBdev1", 00:22:17.959 "uuid": "9fe1f421-aec0-47d0-a4a7-48274b594964", 00:22:17.959 "is_configured": true, 00:22:17.959 "data_offset": 256, 00:22:17.959 "data_size": 7936 00:22:17.959 }, 00:22:17.959 { 00:22:17.959 "name": "BaseBdev2", 00:22:17.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.959 "is_configured": false, 00:22:17.959 "data_offset": 0, 00:22:17.959 "data_size": 0 00:22:17.959 } 00:22:17.960 ] 00:22:17.960 }' 00:22:17.960 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.960 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:18.525 [2024-10-30 10:50:39.741828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:18.525 [2024-10-30 10:50:39.741887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:18.525 [2024-10-30 10:50:39.749883] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.525 [2024-10-30 10:50:39.752459] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:18.525 [2024-10-30 10:50:39.752670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.525 "name": "Existed_Raid", 00:22:18.525 "uuid": "e178b7eb-15d0-4193-8fd6-7d601272032f", 00:22:18.525 "strip_size_kb": 0, 00:22:18.525 "state": "configuring", 00:22:18.525 "raid_level": "raid1", 00:22:18.525 "superblock": true, 00:22:18.525 "num_base_bdevs": 2, 00:22:18.525 "num_base_bdevs_discovered": 1, 00:22:18.525 "num_base_bdevs_operational": 2, 00:22:18.525 "base_bdevs_list": [ 00:22:18.525 { 00:22:18.525 "name": "BaseBdev1", 00:22:18.525 "uuid": "9fe1f421-aec0-47d0-a4a7-48274b594964", 00:22:18.525 "is_configured": true, 00:22:18.525 "data_offset": 256, 00:22:18.525 "data_size": 7936 00:22:18.525 }, 00:22:18.525 { 00:22:18.525 "name": "BaseBdev2", 00:22:18.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.525 "is_configured": false, 00:22:18.525 "data_offset": 0, 00:22:18.525 "data_size": 0 00:22:18.525 } 00:22:18.525 ] 00:22:18.525 }' 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.525 10:50:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.091 10:50:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.091 [2024-10-30 10:50:40.333300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:19.091 BaseBdev2 00:22:19.091 [2024-10-30 10:50:40.333940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:19.091 [2024-10-30 10:50:40.333967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:19.091 [2024-10-30 10:50:40.334336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:19.091 [2024-10-30 10:50:40.334567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:19.091 [2024-10-30 10:50:40.334590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:19.091 [2024-10-30 10:50:40.334782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:19.091 10:50:40 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.091 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.091 [ 00:22:19.091 { 00:22:19.091 "name": "BaseBdev2", 00:22:19.091 "aliases": [ 00:22:19.091 "f2e2be0b-84c8-4670-a3ea-7e3e3feb7e48" 00:22:19.091 ], 00:22:19.091 "product_name": "Malloc disk", 00:22:19.091 "block_size": 4096, 00:22:19.091 "num_blocks": 8192, 00:22:19.091 "uuid": "f2e2be0b-84c8-4670-a3ea-7e3e3feb7e48", 00:22:19.091 "assigned_rate_limits": { 00:22:19.091 "rw_ios_per_sec": 0, 00:22:19.091 "rw_mbytes_per_sec": 0, 00:22:19.091 "r_mbytes_per_sec": 0, 00:22:19.091 "w_mbytes_per_sec": 0 00:22:19.091 }, 00:22:19.091 "claimed": true, 00:22:19.091 "claim_type": "exclusive_write", 00:22:19.091 "zoned": false, 00:22:19.091 "supported_io_types": { 00:22:19.091 "read": true, 00:22:19.091 "write": true, 00:22:19.091 "unmap": true, 00:22:19.091 "flush": true, 00:22:19.091 "reset": true, 00:22:19.091 "nvme_admin": false, 00:22:19.091 "nvme_io": false, 00:22:19.091 "nvme_io_md": false, 00:22:19.091 "write_zeroes": true, 00:22:19.091 "zcopy": true, 00:22:19.091 "get_zone_info": false, 00:22:19.092 "zone_management": false, 00:22:19.092 "zone_append": false, 00:22:19.092 "compare": false, 00:22:19.092 "compare_and_write": false, 00:22:19.092 "abort": true, 00:22:19.092 "seek_hole": false, 00:22:19.092 "seek_data": false, 00:22:19.092 "copy": true, 00:22:19.092 "nvme_iov_md": false 
00:22:19.092 }, 00:22:19.092 "memory_domains": [ 00:22:19.092 { 00:22:19.092 "dma_device_id": "system", 00:22:19.092 "dma_device_type": 1 00:22:19.092 }, 00:22:19.092 { 00:22:19.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.092 "dma_device_type": 2 00:22:19.092 } 00:22:19.092 ], 00:22:19.092 "driver_specific": {} 00:22:19.092 } 00:22:19.092 ] 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.092 "name": "Existed_Raid", 00:22:19.092 "uuid": "e178b7eb-15d0-4193-8fd6-7d601272032f", 00:22:19.092 "strip_size_kb": 0, 00:22:19.092 "state": "online", 00:22:19.092 "raid_level": "raid1", 00:22:19.092 "superblock": true, 00:22:19.092 "num_base_bdevs": 2, 00:22:19.092 "num_base_bdevs_discovered": 2, 00:22:19.092 "num_base_bdevs_operational": 2, 00:22:19.092 "base_bdevs_list": [ 00:22:19.092 { 00:22:19.092 "name": "BaseBdev1", 00:22:19.092 "uuid": "9fe1f421-aec0-47d0-a4a7-48274b594964", 00:22:19.092 "is_configured": true, 00:22:19.092 "data_offset": 256, 00:22:19.092 "data_size": 7936 00:22:19.092 }, 00:22:19.092 { 00:22:19.092 "name": "BaseBdev2", 00:22:19.092 "uuid": "f2e2be0b-84c8-4670-a3ea-7e3e3feb7e48", 00:22:19.092 "is_configured": true, 00:22:19.092 "data_offset": 256, 00:22:19.092 "data_size": 7936 00:22:19.092 } 00:22:19.092 ] 00:22:19.092 }' 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.092 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:19.660 10:50:40 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:19.660 [2024-10-30 10:50:40.925893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.660 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:19.660 "name": "Existed_Raid", 00:22:19.660 "aliases": [ 00:22:19.660 "e178b7eb-15d0-4193-8fd6-7d601272032f" 00:22:19.660 ], 00:22:19.660 "product_name": "Raid Volume", 00:22:19.660 "block_size": 4096, 00:22:19.660 "num_blocks": 7936, 00:22:19.660 "uuid": "e178b7eb-15d0-4193-8fd6-7d601272032f", 00:22:19.660 "assigned_rate_limits": { 00:22:19.660 "rw_ios_per_sec": 0, 00:22:19.660 "rw_mbytes_per_sec": 0, 00:22:19.660 "r_mbytes_per_sec": 0, 00:22:19.660 "w_mbytes_per_sec": 0 00:22:19.660 }, 00:22:19.660 "claimed": false, 00:22:19.660 "zoned": false, 00:22:19.660 "supported_io_types": { 00:22:19.660 "read": true, 
00:22:19.660 "write": true, 00:22:19.660 "unmap": false, 00:22:19.660 "flush": false, 00:22:19.660 "reset": true, 00:22:19.660 "nvme_admin": false, 00:22:19.660 "nvme_io": false, 00:22:19.660 "nvme_io_md": false, 00:22:19.660 "write_zeroes": true, 00:22:19.660 "zcopy": false, 00:22:19.660 "get_zone_info": false, 00:22:19.660 "zone_management": false, 00:22:19.660 "zone_append": false, 00:22:19.660 "compare": false, 00:22:19.660 "compare_and_write": false, 00:22:19.660 "abort": false, 00:22:19.660 "seek_hole": false, 00:22:19.660 "seek_data": false, 00:22:19.660 "copy": false, 00:22:19.660 "nvme_iov_md": false 00:22:19.660 }, 00:22:19.660 "memory_domains": [ 00:22:19.660 { 00:22:19.660 "dma_device_id": "system", 00:22:19.660 "dma_device_type": 1 00:22:19.660 }, 00:22:19.660 { 00:22:19.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.661 "dma_device_type": 2 00:22:19.661 }, 00:22:19.661 { 00:22:19.661 "dma_device_id": "system", 00:22:19.661 "dma_device_type": 1 00:22:19.661 }, 00:22:19.661 { 00:22:19.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.661 "dma_device_type": 2 00:22:19.661 } 00:22:19.661 ], 00:22:19.661 "driver_specific": { 00:22:19.661 "raid": { 00:22:19.661 "uuid": "e178b7eb-15d0-4193-8fd6-7d601272032f", 00:22:19.661 "strip_size_kb": 0, 00:22:19.661 "state": "online", 00:22:19.661 "raid_level": "raid1", 00:22:19.661 "superblock": true, 00:22:19.661 "num_base_bdevs": 2, 00:22:19.661 "num_base_bdevs_discovered": 2, 00:22:19.661 "num_base_bdevs_operational": 2, 00:22:19.661 "base_bdevs_list": [ 00:22:19.661 { 00:22:19.661 "name": "BaseBdev1", 00:22:19.661 "uuid": "9fe1f421-aec0-47d0-a4a7-48274b594964", 00:22:19.661 "is_configured": true, 00:22:19.661 "data_offset": 256, 00:22:19.661 "data_size": 7936 00:22:19.661 }, 00:22:19.661 { 00:22:19.661 "name": "BaseBdev2", 00:22:19.661 "uuid": "f2e2be0b-84c8-4670-a3ea-7e3e3feb7e48", 00:22:19.661 "is_configured": true, 00:22:19.661 "data_offset": 256, 00:22:19.661 "data_size": 7936 00:22:19.661 } 
00:22:19.661 ] 00:22:19.661 } 00:22:19.661 } 00:22:19.661 }' 00:22:19.661 10:50:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:19.661 BaseBdev2' 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.661 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.920 [2024-10-30 10:50:41.205734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.920 "name": "Existed_Raid", 00:22:19.920 "uuid": "e178b7eb-15d0-4193-8fd6-7d601272032f", 00:22:19.920 "strip_size_kb": 0, 00:22:19.920 "state": "online", 00:22:19.920 "raid_level": "raid1", 00:22:19.920 "superblock": true, 00:22:19.920 "num_base_bdevs": 2, 00:22:19.920 
"num_base_bdevs_discovered": 1, 00:22:19.920 "num_base_bdevs_operational": 1, 00:22:19.920 "base_bdevs_list": [ 00:22:19.920 { 00:22:19.920 "name": null, 00:22:19.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.920 "is_configured": false, 00:22:19.920 "data_offset": 0, 00:22:19.920 "data_size": 7936 00:22:19.920 }, 00:22:19.920 { 00:22:19.920 "name": "BaseBdev2", 00:22:19.920 "uuid": "f2e2be0b-84c8-4670-a3ea-7e3e3feb7e48", 00:22:19.920 "is_configured": true, 00:22:19.920 "data_offset": 256, 00:22:19.920 "data_size": 7936 00:22:19.920 } 00:22:19.920 ] 00:22:19.920 }' 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.920 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:20.487 10:50:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.487 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.487 [2024-10-30 10:50:41.864181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:20.487 [2024-10-30 10:50:41.864449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.487 [2024-10-30 10:50:41.957024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.487 [2024-10-30 10:50:41.957097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:20.487 [2024-10-30 10:50:41.957117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:20.746 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.746 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:20.746 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:20.746 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.746 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.746 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:20.746 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:20.746 10:50:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86500 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86500 ']' 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86500 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86500 00:22:20.746 killing process with pid 86500 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86500' 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86500 00:22:20.746 [2024-10-30 10:50:42.044171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:20.746 10:50:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86500 00:22:20.746 [2024-10-30 10:50:42.060294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:21.683 ************************************ 00:22:21.683 END TEST raid_state_function_test_sb_4k 00:22:21.683 ************************************ 00:22:21.683 10:50:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:22:21.683 00:22:21.683 real 0m5.719s 00:22:21.683 user 
0m8.657s 00:22:21.683 sys 0m0.794s 00:22:21.683 10:50:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:21.683 10:50:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:21.942 10:50:43 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:22:21.942 10:50:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:21.942 10:50:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:21.942 10:50:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:21.942 ************************************ 00:22:21.942 START TEST raid_superblock_test_4k 00:22:21.942 ************************************ 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86754 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86754 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86754 ']' 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:21.942 10:50:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:21.942 [2024-10-30 10:50:43.311380] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:22:21.942 [2024-10-30 10:50:43.311853] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86754 ] 00:22:22.201 [2024-10-30 10:50:43.494671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.201 [2024-10-30 10:50:43.630903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.461 [2024-10-30 10:50:43.845297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.461 [2024-10-30 10:50:43.845368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:23.028 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.028 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.029 malloc1 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.029 [2024-10-30 10:50:44.415716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:23.029 [2024-10-30 10:50:44.415944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.029 [2024-10-30 10:50:44.416012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:23.029 [2024-10-30 10:50:44.416031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.029 [2024-10-30 10:50:44.418868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.029 pt1 00:22:23.029 [2024-10-30 10:50:44.419073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.029 malloc2 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.029 [2024-10-30 10:50:44.471240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:23.029 [2024-10-30 10:50:44.471449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.029 [2024-10-30 10:50:44.471611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:23.029 [2024-10-30 10:50:44.471771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.029 [2024-10-30 10:50:44.474808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.029 [2024-10-30 
10:50:44.475002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:23.029 pt2 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.029 [2024-10-30 10:50:44.483349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:23.029 [2024-10-30 10:50:44.486364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:23.029 [2024-10-30 10:50:44.486713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:23.029 [2024-10-30 10:50:44.486854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:23.029 [2024-10-30 10:50:44.487281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:23.029 [2024-10-30 10:50:44.487619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:23.029 [2024-10-30 10:50:44.487756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:23.029 [2024-10-30 10:50:44.488121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.029 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.294 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.294 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.294 "name": "raid_bdev1", 00:22:23.294 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:23.294 "strip_size_kb": 0, 00:22:23.294 "state": "online", 00:22:23.294 "raid_level": "raid1", 00:22:23.294 "superblock": true, 00:22:23.294 "num_base_bdevs": 2, 00:22:23.294 
"num_base_bdevs_discovered": 2, 00:22:23.294 "num_base_bdevs_operational": 2, 00:22:23.294 "base_bdevs_list": [ 00:22:23.294 { 00:22:23.294 "name": "pt1", 00:22:23.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:23.294 "is_configured": true, 00:22:23.294 "data_offset": 256, 00:22:23.294 "data_size": 7936 00:22:23.294 }, 00:22:23.294 { 00:22:23.294 "name": "pt2", 00:22:23.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:23.294 "is_configured": true, 00:22:23.294 "data_offset": 256, 00:22:23.294 "data_size": 7936 00:22:23.294 } 00:22:23.294 ] 00:22:23.294 }' 00:22:23.294 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.294 10:50:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.574 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:23.574 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:23.574 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:23.574 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:23.574 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:23.574 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:23.574 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:23.574 10:50:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:23.574 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.574 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.574 [2024-10-30 10:50:45.008649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:23.574 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:23.833 "name": "raid_bdev1", 00:22:23.833 "aliases": [ 00:22:23.833 "2ac1c38f-99c3-47f5-9322-f015b4323362" 00:22:23.833 ], 00:22:23.833 "product_name": "Raid Volume", 00:22:23.833 "block_size": 4096, 00:22:23.833 "num_blocks": 7936, 00:22:23.833 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:23.833 "assigned_rate_limits": { 00:22:23.833 "rw_ios_per_sec": 0, 00:22:23.833 "rw_mbytes_per_sec": 0, 00:22:23.833 "r_mbytes_per_sec": 0, 00:22:23.833 "w_mbytes_per_sec": 0 00:22:23.833 }, 00:22:23.833 "claimed": false, 00:22:23.833 "zoned": false, 00:22:23.833 "supported_io_types": { 00:22:23.833 "read": true, 00:22:23.833 "write": true, 00:22:23.833 "unmap": false, 00:22:23.833 "flush": false, 00:22:23.833 "reset": true, 00:22:23.833 "nvme_admin": false, 00:22:23.833 "nvme_io": false, 00:22:23.833 "nvme_io_md": false, 00:22:23.833 "write_zeroes": true, 00:22:23.833 "zcopy": false, 00:22:23.833 "get_zone_info": false, 00:22:23.833 "zone_management": false, 00:22:23.833 "zone_append": false, 00:22:23.833 "compare": false, 00:22:23.833 "compare_and_write": false, 00:22:23.833 "abort": false, 00:22:23.833 "seek_hole": false, 00:22:23.833 "seek_data": false, 00:22:23.833 "copy": false, 00:22:23.833 "nvme_iov_md": false 00:22:23.833 }, 00:22:23.833 "memory_domains": [ 00:22:23.833 { 00:22:23.833 "dma_device_id": "system", 00:22:23.833 "dma_device_type": 1 00:22:23.833 }, 00:22:23.833 { 00:22:23.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.833 "dma_device_type": 2 00:22:23.833 }, 00:22:23.833 { 00:22:23.833 "dma_device_id": "system", 00:22:23.833 "dma_device_type": 1 00:22:23.833 }, 00:22:23.833 { 00:22:23.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.833 "dma_device_type": 2 00:22:23.833 } 00:22:23.833 ], 
00:22:23.833 "driver_specific": { 00:22:23.833 "raid": { 00:22:23.833 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:23.833 "strip_size_kb": 0, 00:22:23.833 "state": "online", 00:22:23.833 "raid_level": "raid1", 00:22:23.833 "superblock": true, 00:22:23.833 "num_base_bdevs": 2, 00:22:23.833 "num_base_bdevs_discovered": 2, 00:22:23.833 "num_base_bdevs_operational": 2, 00:22:23.833 "base_bdevs_list": [ 00:22:23.833 { 00:22:23.833 "name": "pt1", 00:22:23.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:23.833 "is_configured": true, 00:22:23.833 "data_offset": 256, 00:22:23.833 "data_size": 7936 00:22:23.833 }, 00:22:23.833 { 00:22:23.833 "name": "pt2", 00:22:23.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:23.833 "is_configured": true, 00:22:23.833 "data_offset": 256, 00:22:23.833 "data_size": 7936 00:22:23.833 } 00:22:23.833 ] 00:22:23.833 } 00:22:23.833 } 00:22:23.833 }' 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:23.833 pt2' 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:23.833 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:23.834 [2024-10-30 10:50:45.264686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.834 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2ac1c38f-99c3-47f5-9322-f015b4323362 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 2ac1c38f-99c3-47f5-9322-f015b4323362 ']' 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.092 [2024-10-30 10:50:45.308317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.092 [2024-10-30 10:50:45.308490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:24.092 [2024-10-30 10:50:45.308694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.092 [2024-10-30 10:50:45.308876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:24.092 [2024-10-30 10:50:45.308910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.092 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.092 [2024-10-30 10:50:45.452386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:24.092 [2024-10-30 10:50:45.455084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:24.092 [2024-10-30 10:50:45.455192] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:24.093 [2024-10-30 10:50:45.455275] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:24.093 [2024-10-30 10:50:45.455303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.093 [2024-10-30 10:50:45.455319] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:24.093 request: 00:22:24.093 { 00:22:24.093 "name": "raid_bdev1", 00:22:24.093 "raid_level": "raid1", 00:22:24.093 "base_bdevs": [ 00:22:24.093 "malloc1", 00:22:24.093 "malloc2" 00:22:24.093 ], 00:22:24.093 "superblock": false, 00:22:24.093 "method": "bdev_raid_create", 00:22:24.093 "req_id": 1 00:22:24.093 } 00:22:24.093 Got JSON-RPC error response 00:22:24.093 response: 00:22:24.093 { 00:22:24.093 "code": -17, 00:22:24.093 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:24.093 } 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.093 [2024-10-30 10:50:45.520383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:24.093 [2024-10-30 10:50:45.520599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.093 [2024-10-30 10:50:45.520668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:24.093 [2024-10-30 10:50:45.520808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.093 [2024-10-30 10:50:45.523772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.093 pt1 00:22:24.093 [2024-10-30 10:50:45.523951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:24.093 [2024-10-30 10:50:45.524077] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:24.093 [2024-10-30 10:50:45.524163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.093 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.351 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.351 "name": "raid_bdev1", 00:22:24.351 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:24.351 "strip_size_kb": 0, 00:22:24.351 "state": "configuring", 00:22:24.351 "raid_level": "raid1", 00:22:24.351 "superblock": true, 00:22:24.351 "num_base_bdevs": 2, 00:22:24.351 "num_base_bdevs_discovered": 1, 00:22:24.352 "num_base_bdevs_operational": 2, 00:22:24.352 "base_bdevs_list": [ 00:22:24.352 { 00:22:24.352 "name": "pt1", 00:22:24.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:24.352 "is_configured": true, 00:22:24.352 "data_offset": 256, 00:22:24.352 "data_size": 7936 00:22:24.352 }, 00:22:24.352 { 00:22:24.352 "name": null, 00:22:24.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:24.352 "is_configured": false, 00:22:24.352 "data_offset": 256, 00:22:24.352 "data_size": 7936 00:22:24.352 } 
00:22:24.352 ] 00:22:24.352 }' 00:22:24.352 10:50:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.352 10:50:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.610 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:24.610 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:24.610 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:24.610 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:24.610 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.610 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.610 [2024-10-30 10:50:46.060596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:24.610 [2024-10-30 10:50:46.060812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.610 [2024-10-30 10:50:46.060888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:24.611 [2024-10-30 10:50:46.061180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.611 [2024-10-30 10:50:46.061774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.611 [2024-10-30 10:50:46.061816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:24.611 [2024-10-30 10:50:46.061918] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:24.611 [2024-10-30 10:50:46.061959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:24.611 [2024-10-30 10:50:46.062126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:22:24.611 [2024-10-30 10:50:46.062148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:24.611 [2024-10-30 10:50:46.062454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:24.611 [2024-10-30 10:50:46.062672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:24.611 [2024-10-30 10:50:46.062688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:24.611 [2024-10-30 10:50:46.062875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.611 pt2 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:24.611 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.870 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.870 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.870 "name": "raid_bdev1", 00:22:24.870 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:24.870 "strip_size_kb": 0, 00:22:24.870 "state": "online", 00:22:24.870 "raid_level": "raid1", 00:22:24.870 "superblock": true, 00:22:24.870 "num_base_bdevs": 2, 00:22:24.870 "num_base_bdevs_discovered": 2, 00:22:24.870 "num_base_bdevs_operational": 2, 00:22:24.870 "base_bdevs_list": [ 00:22:24.870 { 00:22:24.870 "name": "pt1", 00:22:24.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:24.870 "is_configured": true, 00:22:24.870 "data_offset": 256, 00:22:24.870 "data_size": 7936 00:22:24.870 }, 00:22:24.870 { 00:22:24.870 "name": "pt2", 00:22:24.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:24.870 "is_configured": true, 00:22:24.870 "data_offset": 256, 00:22:24.870 "data_size": 7936 00:22:24.870 } 00:22:24.870 ] 00:22:24.870 }' 00:22:24.870 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.870 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.129 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.129 [2024-10-30 10:50:46.581044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:25.388 "name": "raid_bdev1", 00:22:25.388 "aliases": [ 00:22:25.388 "2ac1c38f-99c3-47f5-9322-f015b4323362" 00:22:25.388 ], 00:22:25.388 "product_name": "Raid Volume", 00:22:25.388 "block_size": 4096, 00:22:25.388 "num_blocks": 7936, 00:22:25.388 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:25.388 "assigned_rate_limits": { 00:22:25.388 "rw_ios_per_sec": 0, 00:22:25.388 "rw_mbytes_per_sec": 0, 00:22:25.388 "r_mbytes_per_sec": 0, 00:22:25.388 "w_mbytes_per_sec": 0 00:22:25.388 }, 00:22:25.388 "claimed": false, 00:22:25.388 "zoned": false, 00:22:25.388 "supported_io_types": { 00:22:25.388 "read": true, 00:22:25.388 "write": true, 00:22:25.388 "unmap": false, 
00:22:25.388 "flush": false, 00:22:25.388 "reset": true, 00:22:25.388 "nvme_admin": false, 00:22:25.388 "nvme_io": false, 00:22:25.388 "nvme_io_md": false, 00:22:25.388 "write_zeroes": true, 00:22:25.388 "zcopy": false, 00:22:25.388 "get_zone_info": false, 00:22:25.388 "zone_management": false, 00:22:25.388 "zone_append": false, 00:22:25.388 "compare": false, 00:22:25.388 "compare_and_write": false, 00:22:25.388 "abort": false, 00:22:25.388 "seek_hole": false, 00:22:25.388 "seek_data": false, 00:22:25.388 "copy": false, 00:22:25.388 "nvme_iov_md": false 00:22:25.388 }, 00:22:25.388 "memory_domains": [ 00:22:25.388 { 00:22:25.388 "dma_device_id": "system", 00:22:25.388 "dma_device_type": 1 00:22:25.388 }, 00:22:25.388 { 00:22:25.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.388 "dma_device_type": 2 00:22:25.388 }, 00:22:25.388 { 00:22:25.388 "dma_device_id": "system", 00:22:25.388 "dma_device_type": 1 00:22:25.388 }, 00:22:25.388 { 00:22:25.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.388 "dma_device_type": 2 00:22:25.388 } 00:22:25.388 ], 00:22:25.388 "driver_specific": { 00:22:25.388 "raid": { 00:22:25.388 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:25.388 "strip_size_kb": 0, 00:22:25.388 "state": "online", 00:22:25.388 "raid_level": "raid1", 00:22:25.388 "superblock": true, 00:22:25.388 "num_base_bdevs": 2, 00:22:25.388 "num_base_bdevs_discovered": 2, 00:22:25.388 "num_base_bdevs_operational": 2, 00:22:25.388 "base_bdevs_list": [ 00:22:25.388 { 00:22:25.388 "name": "pt1", 00:22:25.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:25.388 "is_configured": true, 00:22:25.388 "data_offset": 256, 00:22:25.388 "data_size": 7936 00:22:25.388 }, 00:22:25.388 { 00:22:25.388 "name": "pt2", 00:22:25.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:25.388 "is_configured": true, 00:22:25.388 "data_offset": 256, 00:22:25.388 "data_size": 7936 00:22:25.388 } 00:22:25.388 ] 00:22:25.388 } 00:22:25.388 } 00:22:25.388 }' 00:22:25.388 
10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:25.388 pt2' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.388 
10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.388 [2024-10-30 10:50:46.837111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.388 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 2ac1c38f-99c3-47f5-9322-f015b4323362 '!=' 2ac1c38f-99c3-47f5-9322-f015b4323362 ']' 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.647 [2024-10-30 10:50:46.884921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:25.647 
10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.647 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.647 "name": "raid_bdev1", 00:22:25.647 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 
00:22:25.647 "strip_size_kb": 0, 00:22:25.647 "state": "online", 00:22:25.647 "raid_level": "raid1", 00:22:25.647 "superblock": true, 00:22:25.647 "num_base_bdevs": 2, 00:22:25.647 "num_base_bdevs_discovered": 1, 00:22:25.647 "num_base_bdevs_operational": 1, 00:22:25.647 "base_bdevs_list": [ 00:22:25.647 { 00:22:25.647 "name": null, 00:22:25.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.647 "is_configured": false, 00:22:25.647 "data_offset": 0, 00:22:25.647 "data_size": 7936 00:22:25.647 }, 00:22:25.647 { 00:22:25.647 "name": "pt2", 00:22:25.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:25.647 "is_configured": true, 00:22:25.647 "data_offset": 256, 00:22:25.647 "data_size": 7936 00:22:25.647 } 00:22:25.647 ] 00:22:25.647 }' 00:22:25.648 10:50:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.648 10:50:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:25.906 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:25.906 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.906 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.166 [2024-10-30 10:50:47.377109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.166 [2024-10-30 10:50:47.377287] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:26.166 [2024-10-30 10:50:47.377407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.166 [2024-10-30 10:50:47.377471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.166 [2024-10-30 10:50:47.377491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:26.166 10:50:47 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:22:26.166 10:50:47 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.166 [2024-10-30 10:50:47.445077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:26.166 [2024-10-30 10:50:47.445281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.166 [2024-10-30 10:50:47.445429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:26.166 [2024-10-30 10:50:47.445598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.166 [2024-10-30 10:50:47.448604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.166 pt2 00:22:26.166 [2024-10-30 10:50:47.448802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:26.166 [2024-10-30 10:50:47.448916] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:26.166 [2024-10-30 10:50:47.449002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:26.166 [2024-10-30 10:50:47.449137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:26.166 [2024-10-30 10:50:47.449160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:26.166 [2024-10-30 10:50:47.449468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:26.166 [2024-10-30 10:50:47.449671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:26.166 [2024-10-30 10:50:47.449687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:22:26.166 [2024-10-30 10:50:47.449938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.166 "name": "raid_bdev1", 00:22:26.166 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:26.166 "strip_size_kb": 0, 00:22:26.166 "state": "online", 00:22:26.166 "raid_level": "raid1", 00:22:26.166 "superblock": true, 00:22:26.166 "num_base_bdevs": 2, 00:22:26.166 "num_base_bdevs_discovered": 1, 00:22:26.166 "num_base_bdevs_operational": 1, 00:22:26.166 "base_bdevs_list": [ 00:22:26.166 { 00:22:26.166 "name": null, 00:22:26.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.166 "is_configured": false, 00:22:26.166 "data_offset": 256, 00:22:26.166 "data_size": 7936 00:22:26.166 }, 00:22:26.166 { 00:22:26.166 "name": "pt2", 00:22:26.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.166 "is_configured": true, 00:22:26.166 "data_offset": 256, 00:22:26.166 "data_size": 7936 00:22:26.166 } 00:22:26.166 ] 00:22:26.166 }' 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.166 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.734 [2024-10-30 10:50:47.977385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.734 [2024-10-30 10:50:47.977422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:26.734 [2024-10-30 10:50:47.977551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.734 [2024-10-30 10:50:47.977635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.734 [2024-10-30 10:50:47.977651] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.734 10:50:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.734 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:26.734 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:26.734 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:26.734 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:26.734 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.734 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.734 [2024-10-30 10:50:48.041464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:26.734 [2024-10-30 10:50:48.041738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.734 [2024-10-30 10:50:48.041785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:26.734 [2024-10-30 10:50:48.041802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.734 [2024-10-30 10:50:48.044950] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.734 [2024-10-30 10:50:48.045012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:26.734 [2024-10-30 10:50:48.045152] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:26.735 [2024-10-30 10:50:48.045214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:26.735 [2024-10-30 10:50:48.045398] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:26.735 [2024-10-30 10:50:48.045416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:26.735 [2024-10-30 10:50:48.045439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:26.735 [2024-10-30 10:50:48.045516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:26.735 pt1 00:22:26.735 [2024-10-30 10:50:48.045671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:26.735 [2024-10-30 10:50:48.045695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:26.735 [2024-10-30 10:50:48.046022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:26.735 [2024-10-30 10:50:48.046228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:26.735 [2024-10-30 10:50:48.046249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:26.735 [2024-10-30 10:50:48.046432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:26.735 "name": "raid_bdev1", 00:22:26.735 "uuid": "2ac1c38f-99c3-47f5-9322-f015b4323362", 00:22:26.735 "strip_size_kb": 0, 00:22:26.735 "state": "online", 00:22:26.735 "raid_level": "raid1", 
00:22:26.735 "superblock": true, 00:22:26.735 "num_base_bdevs": 2, 00:22:26.735 "num_base_bdevs_discovered": 1, 00:22:26.735 "num_base_bdevs_operational": 1, 00:22:26.735 "base_bdevs_list": [ 00:22:26.735 { 00:22:26.735 "name": null, 00:22:26.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.735 "is_configured": false, 00:22:26.735 "data_offset": 256, 00:22:26.735 "data_size": 7936 00:22:26.735 }, 00:22:26.735 { 00:22:26.735 "name": "pt2", 00:22:26.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:26.735 "is_configured": true, 00:22:26.735 "data_offset": 256, 00:22:26.735 "data_size": 7936 00:22:26.735 } 00:22:26.735 ] 00:22:26.735 }' 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:26.735 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:27.302 
[2024-10-30 10:50:48.621969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 2ac1c38f-99c3-47f5-9322-f015b4323362 '!=' 2ac1c38f-99c3-47f5-9322-f015b4323362 ']' 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86754 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86754 ']' 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86754 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86754 00:22:27.302 killing process with pid 86754 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86754' 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86754 00:22:27.302 [2024-10-30 10:50:48.698731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:27.302 10:50:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86754 00:22:27.302 [2024-10-30 10:50:48.698830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.302 [2024-10-30 10:50:48.698893] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:22:27.302 [2024-10-30 10:50:48.698914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:27.560 [2024-10-30 10:50:48.893808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:28.932 10:50:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:22:28.932 00:22:28.932 real 0m6.779s 00:22:28.932 user 0m10.696s 00:22:28.932 sys 0m0.977s 00:22:28.932 10:50:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:28.932 ************************************ 00:22:28.932 END TEST raid_superblock_test_4k 00:22:28.932 ************************************ 00:22:28.932 10:50:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:22:28.932 10:50:50 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:22:28.932 10:50:50 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:22:28.932 10:50:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:28.932 10:50:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:28.932 10:50:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:28.932 ************************************ 00:22:28.932 START TEST raid_rebuild_test_sb_4k 00:22:28.932 ************************************ 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:28.932 10:50:50 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87082 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87082 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 87082 ']' 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:28.932 10:50:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:28.932 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:28.932 Zero copy mechanism will not be used. 00:22:28.932 [2024-10-30 10:50:50.157586] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:22:28.932 [2024-10-30 10:50:50.157747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87082 ] 00:22:28.932 [2024-10-30 10:50:50.346487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.190 [2024-10-30 10:50:50.502087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.449 [2024-10-30 10:50:50.719085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.449 [2024-10-30 10:50:50.719130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.708 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:29.708 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:22:29.708 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:29.708 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:22:29.708 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.708 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 BaseBdev1_malloc 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 [2024-10-30 10:50:51.217655] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:29.968 [2024-10-30 10:50:51.217761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.968 [2024-10-30 10:50:51.217795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:29.968 [2024-10-30 10:50:51.217813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.968 [2024-10-30 10:50:51.220629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.968 [2024-10-30 10:50:51.220812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:29.968 BaseBdev1 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 BaseBdev2_malloc 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 [2024-10-30 10:50:51.275072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:29.968 [2024-10-30 10:50:51.275146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:22:29.968 [2024-10-30 10:50:51.275186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:29.968 [2024-10-30 10:50:51.275208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.968 [2024-10-30 10:50:51.278116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.968 [2024-10-30 10:50:51.278164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:29.968 BaseBdev2 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 spare_malloc 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 spare_delay 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 
[2024-10-30 10:50:51.348268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:29.968 [2024-10-30 10:50:51.348377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.968 [2024-10-30 10:50:51.348408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:29.968 [2024-10-30 10:50:51.348426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.968 [2024-10-30 10:50:51.351377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.968 [2024-10-30 10:50:51.351559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:29.968 spare 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 [2024-10-30 10:50:51.360488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:29.968 [2024-10-30 10:50:51.363160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:29.968 [2024-10-30 10:50:51.363421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:29.968 [2024-10-30 10:50:51.363448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:29.968 [2024-10-30 10:50:51.363831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:29.968 [2024-10-30 10:50:51.364231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:29.968 [2024-10-30 
10:50:51.364285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:29.968 [2024-10-30 10:50:51.364559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.968 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.968 "name": "raid_bdev1", 00:22:29.968 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:29.968 "strip_size_kb": 0, 00:22:29.968 "state": "online", 00:22:29.968 "raid_level": "raid1", 00:22:29.968 "superblock": true, 00:22:29.968 "num_base_bdevs": 2, 00:22:29.968 "num_base_bdevs_discovered": 2, 00:22:29.968 "num_base_bdevs_operational": 2, 00:22:29.968 "base_bdevs_list": [ 00:22:29.968 { 00:22:29.968 "name": "BaseBdev1", 00:22:29.968 "uuid": "aa672f49-264f-543b-aad5-3b9f91002ecb", 00:22:29.968 "is_configured": true, 00:22:29.968 "data_offset": 256, 00:22:29.969 "data_size": 7936 00:22:29.969 }, 00:22:29.969 { 00:22:29.969 "name": "BaseBdev2", 00:22:29.969 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:29.969 "is_configured": true, 00:22:29.969 "data_offset": 256, 00:22:29.969 "data_size": 7936 00:22:29.969 } 00:22:29.969 ] 00:22:29.969 }' 00:22:29.969 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.969 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:30.536 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:30.536 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:30.536 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.537 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:30.537 [2024-10-30 10:50:51.925149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:30.537 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.537 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:22:30.537 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:30.537 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.537 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.537 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:30.537 10:50:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.795 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:30.796 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:30.796 
10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:31.055 [2024-10-30 10:50:52.317013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:31.055 /dev/nbd0 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:31.055 1+0 records in 00:22:31.055 1+0 records out 00:22:31.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360065 s, 11.4 MB/s 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:22:31.055 10:50:52 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:31.055 10:50:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:31.992 7936+0 records in 00:22:31.992 7936+0 records out 00:22:31.992 32505856 bytes (33 MB, 31 MiB) copied, 0.876399 s, 37.1 MB/s 00:22:31.992 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:31.992 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:31.992 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:31.992 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:31.992 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:31.992 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:31.992 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:32.251 
[2024-10-30 10:50:53.564635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:32.251 [2024-10-30 10:50:53.576717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:32.251 10:50:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.251 "name": "raid_bdev1", 00:22:32.251 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:32.251 "strip_size_kb": 0, 00:22:32.251 "state": "online", 00:22:32.251 "raid_level": "raid1", 00:22:32.251 "superblock": true, 00:22:32.251 "num_base_bdevs": 2, 00:22:32.251 "num_base_bdevs_discovered": 1, 00:22:32.251 "num_base_bdevs_operational": 1, 00:22:32.251 "base_bdevs_list": [ 00:22:32.251 { 00:22:32.251 "name": null, 00:22:32.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.251 "is_configured": false, 00:22:32.251 "data_offset": 0, 00:22:32.251 "data_size": 7936 00:22:32.251 }, 00:22:32.251 { 00:22:32.251 "name": "BaseBdev2", 00:22:32.251 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:32.251 "is_configured": true, 00:22:32.251 "data_offset": 256, 00:22:32.251 
"data_size": 7936 00:22:32.251 } 00:22:32.251 ] 00:22:32.251 }' 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.251 10:50:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:32.820 10:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:32.820 10:50:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.820 10:50:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:32.820 [2024-10-30 10:50:54.068954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:32.820 [2024-10-30 10:50:54.087243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:32.820 10:50:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.820 10:50:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:32.820 [2024-10-30 10:50:54.089708] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:33.759 "name": "raid_bdev1", 00:22:33.759 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:33.759 "strip_size_kb": 0, 00:22:33.759 "state": "online", 00:22:33.759 "raid_level": "raid1", 00:22:33.759 "superblock": true, 00:22:33.759 "num_base_bdevs": 2, 00:22:33.759 "num_base_bdevs_discovered": 2, 00:22:33.759 "num_base_bdevs_operational": 2, 00:22:33.759 "process": { 00:22:33.759 "type": "rebuild", 00:22:33.759 "target": "spare", 00:22:33.759 "progress": { 00:22:33.759 "blocks": 2560, 00:22:33.759 "percent": 32 00:22:33.759 } 00:22:33.759 }, 00:22:33.759 "base_bdevs_list": [ 00:22:33.759 { 00:22:33.759 "name": "spare", 00:22:33.759 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:33.759 "is_configured": true, 00:22:33.759 "data_offset": 256, 00:22:33.759 "data_size": 7936 00:22:33.759 }, 00:22:33.759 { 00:22:33.759 "name": "BaseBdev2", 00:22:33.759 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:33.759 "is_configured": true, 00:22:33.759 "data_offset": 256, 00:22:33.759 "data_size": 7936 00:22:33.759 } 00:22:33.759 ] 00:22:33.759 }' 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.759 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:34.020 [2024-10-30 10:50:55.251146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:34.020 [2024-10-30 10:50:55.298429] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:34.020 [2024-10-30 10:50:55.298514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.020 [2024-10-30 10:50:55.298539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:34.020 [2024-10-30 10:50:55.298560] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.020 "name": "raid_bdev1", 00:22:34.020 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:34.020 "strip_size_kb": 0, 00:22:34.020 "state": "online", 00:22:34.020 "raid_level": "raid1", 00:22:34.020 "superblock": true, 00:22:34.020 "num_base_bdevs": 2, 00:22:34.020 "num_base_bdevs_discovered": 1, 00:22:34.020 "num_base_bdevs_operational": 1, 00:22:34.020 "base_bdevs_list": [ 00:22:34.020 { 00:22:34.020 "name": null, 00:22:34.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.020 "is_configured": false, 00:22:34.020 "data_offset": 0, 00:22:34.020 "data_size": 7936 00:22:34.020 }, 00:22:34.020 { 00:22:34.020 "name": "BaseBdev2", 00:22:34.020 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:34.020 "is_configured": true, 00:22:34.020 "data_offset": 256, 00:22:34.020 "data_size": 7936 00:22:34.020 } 00:22:34.020 ] 00:22:34.020 }' 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.020 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:34.589 10:50:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.589 "name": "raid_bdev1", 00:22:34.589 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:34.589 "strip_size_kb": 0, 00:22:34.589 "state": "online", 00:22:34.589 "raid_level": "raid1", 00:22:34.589 "superblock": true, 00:22:34.589 "num_base_bdevs": 2, 00:22:34.589 "num_base_bdevs_discovered": 1, 00:22:34.589 "num_base_bdevs_operational": 1, 00:22:34.589 "base_bdevs_list": [ 00:22:34.589 { 00:22:34.589 "name": null, 00:22:34.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.589 "is_configured": false, 00:22:34.589 "data_offset": 0, 00:22:34.589 "data_size": 7936 00:22:34.589 }, 00:22:34.589 { 00:22:34.589 "name": "BaseBdev2", 00:22:34.589 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:34.589 "is_configured": true, 00:22:34.589 "data_offset": 
256, 00:22:34.589 "data_size": 7936 00:22:34.589 } 00:22:34.589 ] 00:22:34.589 }' 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:34.589 10:50:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.589 10:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:34.589 10:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:34.589 10:50:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.589 10:50:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:34.589 [2024-10-30 10:50:56.014693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:34.589 [2024-10-30 10:50:56.030794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:34.589 10:50:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.589 10:50:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:34.589 [2024-10-30 10:50:56.033316] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.968 "name": "raid_bdev1", 00:22:35.968 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:35.968 "strip_size_kb": 0, 00:22:35.968 "state": "online", 00:22:35.968 "raid_level": "raid1", 00:22:35.968 "superblock": true, 00:22:35.968 "num_base_bdevs": 2, 00:22:35.968 "num_base_bdevs_discovered": 2, 00:22:35.968 "num_base_bdevs_operational": 2, 00:22:35.968 "process": { 00:22:35.968 "type": "rebuild", 00:22:35.968 "target": "spare", 00:22:35.968 "progress": { 00:22:35.968 "blocks": 2560, 00:22:35.968 "percent": 32 00:22:35.968 } 00:22:35.968 }, 00:22:35.968 "base_bdevs_list": [ 00:22:35.968 { 00:22:35.968 "name": "spare", 00:22:35.968 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:35.968 "is_configured": true, 00:22:35.968 "data_offset": 256, 00:22:35.968 "data_size": 7936 00:22:35.968 }, 00:22:35.968 { 00:22:35.968 "name": "BaseBdev2", 00:22:35.968 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:35.968 "is_configured": true, 00:22:35.968 "data_offset": 256, 00:22:35.968 "data_size": 7936 00:22:35.968 } 00:22:35.968 ] 00:22:35.968 }' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:35.968 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=731 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.968 "name": "raid_bdev1", 00:22:35.968 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:35.968 "strip_size_kb": 0, 00:22:35.968 "state": "online", 00:22:35.968 "raid_level": "raid1", 00:22:35.968 "superblock": true, 00:22:35.968 "num_base_bdevs": 2, 00:22:35.968 "num_base_bdevs_discovered": 2, 00:22:35.968 "num_base_bdevs_operational": 2, 00:22:35.968 "process": { 00:22:35.968 "type": "rebuild", 00:22:35.968 "target": "spare", 00:22:35.968 "progress": { 00:22:35.968 "blocks": 2816, 00:22:35.968 "percent": 35 00:22:35.968 } 00:22:35.968 }, 00:22:35.968 "base_bdevs_list": [ 00:22:35.968 { 00:22:35.968 "name": "spare", 00:22:35.968 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:35.968 "is_configured": true, 00:22:35.968 "data_offset": 256, 00:22:35.968 "data_size": 7936 00:22:35.968 }, 00:22:35.968 { 00:22:35.968 "name": "BaseBdev2", 00:22:35.968 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:35.968 "is_configured": true, 00:22:35.968 "data_offset": 256, 00:22:35.968 "data_size": 7936 00:22:35.968 } 00:22:35.968 ] 00:22:35.968 }' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.968 10:50:57 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.907 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.167 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:37.167 "name": "raid_bdev1", 00:22:37.167 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:37.167 "strip_size_kb": 0, 00:22:37.167 "state": "online", 00:22:37.167 "raid_level": "raid1", 00:22:37.167 "superblock": true, 00:22:37.167 "num_base_bdevs": 2, 00:22:37.167 "num_base_bdevs_discovered": 2, 00:22:37.167 "num_base_bdevs_operational": 2, 00:22:37.167 "process": { 00:22:37.167 "type": "rebuild", 00:22:37.167 "target": "spare", 00:22:37.167 "progress": { 00:22:37.167 "blocks": 5888, 00:22:37.167 "percent": 74 00:22:37.167 } 00:22:37.167 }, 00:22:37.167 "base_bdevs_list": [ 00:22:37.167 { 
00:22:37.167 "name": "spare", 00:22:37.167 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:37.167 "is_configured": true, 00:22:37.167 "data_offset": 256, 00:22:37.167 "data_size": 7936 00:22:37.167 }, 00:22:37.167 { 00:22:37.167 "name": "BaseBdev2", 00:22:37.167 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:37.167 "is_configured": true, 00:22:37.167 "data_offset": 256, 00:22:37.167 "data_size": 7936 00:22:37.167 } 00:22:37.167 ] 00:22:37.167 }' 00:22:37.167 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:37.167 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:37.167 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:37.167 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:37.167 10:50:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:37.736 [2024-10-30 10:50:59.154812] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:37.736 [2024-10-30 10:50:59.154925] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:37.736 [2024-10-30 10:50:59.155103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.303 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:38.303 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:38.303 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:38.303 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:38.303 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:22:38.303 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:38.304 "name": "raid_bdev1", 00:22:38.304 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:38.304 "strip_size_kb": 0, 00:22:38.304 "state": "online", 00:22:38.304 "raid_level": "raid1", 00:22:38.304 "superblock": true, 00:22:38.304 "num_base_bdevs": 2, 00:22:38.304 "num_base_bdevs_discovered": 2, 00:22:38.304 "num_base_bdevs_operational": 2, 00:22:38.304 "base_bdevs_list": [ 00:22:38.304 { 00:22:38.304 "name": "spare", 00:22:38.304 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:38.304 "is_configured": true, 00:22:38.304 "data_offset": 256, 00:22:38.304 "data_size": 7936 00:22:38.304 }, 00:22:38.304 { 00:22:38.304 "name": "BaseBdev2", 00:22:38.304 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:38.304 "is_configured": true, 00:22:38.304 "data_offset": 256, 00:22:38.304 "data_size": 7936 00:22:38.304 } 00:22:38.304 ] 00:22:38.304 }' 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:38.304 "name": "raid_bdev1", 00:22:38.304 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:38.304 "strip_size_kb": 0, 00:22:38.304 "state": "online", 00:22:38.304 "raid_level": "raid1", 00:22:38.304 "superblock": true, 00:22:38.304 "num_base_bdevs": 2, 00:22:38.304 "num_base_bdevs_discovered": 2, 00:22:38.304 "num_base_bdevs_operational": 2, 00:22:38.304 "base_bdevs_list": [ 00:22:38.304 { 00:22:38.304 "name": "spare", 00:22:38.304 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:38.304 "is_configured": true, 00:22:38.304 
"data_offset": 256, 00:22:38.304 "data_size": 7936 00:22:38.304 }, 00:22:38.304 { 00:22:38.304 "name": "BaseBdev2", 00:22:38.304 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:38.304 "is_configured": true, 00:22:38.304 "data_offset": 256, 00:22:38.304 "data_size": 7936 00:22:38.304 } 00:22:38.304 ] 00:22:38.304 }' 00:22:38.304 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.564 "name": "raid_bdev1", 00:22:38.564 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:38.564 "strip_size_kb": 0, 00:22:38.564 "state": "online", 00:22:38.564 "raid_level": "raid1", 00:22:38.564 "superblock": true, 00:22:38.564 "num_base_bdevs": 2, 00:22:38.564 "num_base_bdevs_discovered": 2, 00:22:38.564 "num_base_bdevs_operational": 2, 00:22:38.564 "base_bdevs_list": [ 00:22:38.564 { 00:22:38.564 "name": "spare", 00:22:38.564 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:38.564 "is_configured": true, 00:22:38.564 "data_offset": 256, 00:22:38.564 "data_size": 7936 00:22:38.564 }, 00:22:38.564 { 00:22:38.564 "name": "BaseBdev2", 00:22:38.564 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:38.564 "is_configured": true, 00:22:38.564 "data_offset": 256, 00:22:38.564 "data_size": 7936 00:22:38.564 } 00:22:38.564 ] 00:22:38.564 }' 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.564 10:50:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.133 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:39.133 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.133 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.133 
[2024-10-30 10:51:00.339279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:39.133 [2024-10-30 10:51:00.339465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:39.133 [2024-10-30 10:51:00.339675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.134 [2024-10-30 10:51:00.339891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.134 [2024-10-30 10:51:00.340073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:39.134 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:39.393 /dev/nbd0 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:39.393 1+0 records in 00:22:39.393 1+0 records out 00:22:39.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304026 s, 13.5 MB/s 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:39.393 10:51:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:39.652 /dev/nbd1 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:39.911 1+0 records in 00:22:39.911 1+0 records out 00:22:39.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044316 s, 9.2 MB/s 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:39.911 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:40.478 10:51:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:40.738 10:51:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.738 [2024-10-30 10:51:02.093436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:40.738 [2024-10-30 10:51:02.093500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.738 [2024-10-30 10:51:02.093536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:40.738 [2024-10-30 10:51:02.093551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.738 [2024-10-30 10:51:02.096752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.738 
[2024-10-30 10:51:02.096798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:40.738 [2024-10-30 10:51:02.096921] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:40.738 [2024-10-30 10:51:02.097012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:40.738 [2024-10-30 10:51:02.097213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.738 spare 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.738 [2024-10-30 10:51:02.197336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:40.738 [2024-10-30 10:51:02.197378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:40.738 [2024-10-30 10:51:02.197807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:40.738 [2024-10-30 10:51:02.198095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:40.738 [2024-10-30 10:51:02.198118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:40.738 [2024-10-30 10:51:02.198368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:40.738 10:51:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.738 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.997 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.997 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.997 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:40.997 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.997 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.997 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.997 "name": "raid_bdev1", 00:22:40.997 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:40.997 "strip_size_kb": 0, 00:22:40.997 "state": "online", 00:22:40.997 "raid_level": "raid1", 00:22:40.997 "superblock": true, 00:22:40.997 "num_base_bdevs": 2, 00:22:40.997 "num_base_bdevs_discovered": 2, 00:22:40.997 "num_base_bdevs_operational": 2, 
00:22:40.997 "base_bdevs_list": [ 00:22:40.997 { 00:22:40.997 "name": "spare", 00:22:40.997 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:40.997 "is_configured": true, 00:22:40.997 "data_offset": 256, 00:22:40.997 "data_size": 7936 00:22:40.997 }, 00:22:40.997 { 00:22:40.997 "name": "BaseBdev2", 00:22:40.997 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:40.997 "is_configured": true, 00:22:40.997 "data_offset": 256, 00:22:40.997 "data_size": 7936 00:22:40.997 } 00:22:40.997 ] 00:22:40.997 }' 00:22:40.997 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.997 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:41.567 "name": "raid_bdev1", 00:22:41.567 
"uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:41.567 "strip_size_kb": 0, 00:22:41.567 "state": "online", 00:22:41.567 "raid_level": "raid1", 00:22:41.567 "superblock": true, 00:22:41.567 "num_base_bdevs": 2, 00:22:41.567 "num_base_bdevs_discovered": 2, 00:22:41.567 "num_base_bdevs_operational": 2, 00:22:41.567 "base_bdevs_list": [ 00:22:41.567 { 00:22:41.567 "name": "spare", 00:22:41.567 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:41.567 "is_configured": true, 00:22:41.567 "data_offset": 256, 00:22:41.567 "data_size": 7936 00:22:41.567 }, 00:22:41.567 { 00:22:41.567 "name": "BaseBdev2", 00:22:41.567 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:41.567 "is_configured": true, 00:22:41.567 "data_offset": 256, 00:22:41.567 "data_size": 7936 00:22:41.567 } 00:22:41.567 ] 00:22:41.567 }' 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.567 [2024-10-30 10:51:02.970548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.567 
10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:41.567 10:51:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.827 10:51:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.827 "name": "raid_bdev1", 00:22:41.827 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:41.827 "strip_size_kb": 0, 00:22:41.827 "state": "online", 00:22:41.827 "raid_level": "raid1", 00:22:41.827 "superblock": true, 00:22:41.827 "num_base_bdevs": 2, 00:22:41.827 "num_base_bdevs_discovered": 1, 00:22:41.827 "num_base_bdevs_operational": 1, 00:22:41.827 "base_bdevs_list": [ 00:22:41.827 { 00:22:41.827 "name": null, 00:22:41.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.827 "is_configured": false, 00:22:41.827 "data_offset": 0, 00:22:41.827 "data_size": 7936 00:22:41.827 }, 00:22:41.827 { 00:22:41.827 "name": "BaseBdev2", 00:22:41.827 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:41.827 "is_configured": true, 00:22:41.827 "data_offset": 256, 00:22:41.827 "data_size": 7936 00:22:41.827 } 00:22:41.827 ] 00:22:41.827 }' 00:22:41.827 10:51:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.827 10:51:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.086 10:51:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:42.086 10:51:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.086 10:51:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:42.086 [2024-10-30 10:51:03.498847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.086 [2024-10-30 10:51:03.499519] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:22:42.086 [2024-10-30 10:51:03.499745] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:42.086 [2024-10-30 10:51:03.500002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.086 [2024-10-30 10:51:03.516532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:42.086 10:51:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.086 10:51:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:42.086 [2024-10-30 10:51:03.519257] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.466 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:43.466 
"name": "raid_bdev1", 00:22:43.467 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:43.467 "strip_size_kb": 0, 00:22:43.467 "state": "online", 00:22:43.467 "raid_level": "raid1", 00:22:43.467 "superblock": true, 00:22:43.467 "num_base_bdevs": 2, 00:22:43.467 "num_base_bdevs_discovered": 2, 00:22:43.467 "num_base_bdevs_operational": 2, 00:22:43.467 "process": { 00:22:43.467 "type": "rebuild", 00:22:43.467 "target": "spare", 00:22:43.467 "progress": { 00:22:43.467 "blocks": 2560, 00:22:43.467 "percent": 32 00:22:43.467 } 00:22:43.467 }, 00:22:43.467 "base_bdevs_list": [ 00:22:43.467 { 00:22:43.467 "name": "spare", 00:22:43.467 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:43.467 "is_configured": true, 00:22:43.467 "data_offset": 256, 00:22:43.467 "data_size": 7936 00:22:43.467 }, 00:22:43.467 { 00:22:43.467 "name": "BaseBdev2", 00:22:43.467 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:43.467 "is_configured": true, 00:22:43.467 "data_offset": 256, 00:22:43.467 "data_size": 7936 00:22:43.467 } 00:22:43.467 ] 00:22:43.467 }' 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.467 [2024-10-30 10:51:04.684335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:43.467 [2024-10-30 
10:51:04.728065] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:43.467 [2024-10-30 10:51:04.728210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.467 [2024-10-30 10:51:04.728251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:43.467 [2024-10-30 10:51:04.728266] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.467 10:51:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.467 "name": "raid_bdev1", 00:22:43.467 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:43.467 "strip_size_kb": 0, 00:22:43.467 "state": "online", 00:22:43.467 "raid_level": "raid1", 00:22:43.467 "superblock": true, 00:22:43.467 "num_base_bdevs": 2, 00:22:43.467 "num_base_bdevs_discovered": 1, 00:22:43.467 "num_base_bdevs_operational": 1, 00:22:43.467 "base_bdevs_list": [ 00:22:43.467 { 00:22:43.467 "name": null, 00:22:43.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.467 "is_configured": false, 00:22:43.467 "data_offset": 0, 00:22:43.467 "data_size": 7936 00:22:43.467 }, 00:22:43.467 { 00:22:43.467 "name": "BaseBdev2", 00:22:43.467 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:43.467 "is_configured": true, 00:22:43.467 "data_offset": 256, 00:22:43.467 "data_size": 7936 00:22:43.467 } 00:22:43.467 ] 00:22:43.467 }' 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.467 10:51:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.037 10:51:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:44.037 10:51:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.037 10:51:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.037 [2024-10-30 10:51:05.283199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:44.037 [2024-10-30 10:51:05.283410] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.037 [2024-10-30 10:51:05.283484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:44.037 [2024-10-30 10:51:05.283779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.037 [2024-10-30 10:51:05.284466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.037 [2024-10-30 10:51:05.284528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:44.037 [2024-10-30 10:51:05.284647] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:44.037 [2024-10-30 10:51:05.284672] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:44.037 [2024-10-30 10:51:05.284687] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:44.037 [2024-10-30 10:51:05.284725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:44.037 [2024-10-30 10:51:05.301291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:44.037 spare 00:22:44.037 10:51:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.037 10:51:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:44.037 [2024-10-30 10:51:05.303850] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:44.975 "name": "raid_bdev1", 00:22:44.975 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:44.975 "strip_size_kb": 0, 00:22:44.975 
"state": "online", 00:22:44.975 "raid_level": "raid1", 00:22:44.975 "superblock": true, 00:22:44.975 "num_base_bdevs": 2, 00:22:44.975 "num_base_bdevs_discovered": 2, 00:22:44.975 "num_base_bdevs_operational": 2, 00:22:44.975 "process": { 00:22:44.975 "type": "rebuild", 00:22:44.975 "target": "spare", 00:22:44.975 "progress": { 00:22:44.975 "blocks": 2560, 00:22:44.975 "percent": 32 00:22:44.975 } 00:22:44.975 }, 00:22:44.975 "base_bdevs_list": [ 00:22:44.975 { 00:22:44.975 "name": "spare", 00:22:44.975 "uuid": "631412e5-c215-5c09-9960-fa45691c058b", 00:22:44.975 "is_configured": true, 00:22:44.975 "data_offset": 256, 00:22:44.975 "data_size": 7936 00:22:44.975 }, 00:22:44.975 { 00:22:44.975 "name": "BaseBdev2", 00:22:44.975 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:44.975 "is_configured": true, 00:22:44.975 "data_offset": 256, 00:22:44.975 "data_size": 7936 00:22:44.975 } 00:22:44.975 ] 00:22:44.975 }' 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.975 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.235 [2024-10-30 10:51:06.473955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.235 [2024-10-30 10:51:06.512995] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:22:45.235 [2024-10-30 10:51:06.513089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.235 [2024-10-30 10:51:06.513118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:45.235 [2024-10-30 10:51:06.513130] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.235 10:51:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.235 "name": "raid_bdev1", 00:22:45.235 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:45.235 "strip_size_kb": 0, 00:22:45.235 "state": "online", 00:22:45.235 "raid_level": "raid1", 00:22:45.235 "superblock": true, 00:22:45.235 "num_base_bdevs": 2, 00:22:45.235 "num_base_bdevs_discovered": 1, 00:22:45.235 "num_base_bdevs_operational": 1, 00:22:45.235 "base_bdevs_list": [ 00:22:45.235 { 00:22:45.235 "name": null, 00:22:45.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.235 "is_configured": false, 00:22:45.235 "data_offset": 0, 00:22:45.235 "data_size": 7936 00:22:45.235 }, 00:22:45.235 { 00:22:45.235 "name": "BaseBdev2", 00:22:45.235 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:45.235 "is_configured": true, 00:22:45.235 "data_offset": 256, 00:22:45.235 "data_size": 7936 00:22:45.235 } 00:22:45.235 ] 00:22:45.235 }' 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.235 10:51:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:45.803 "name": "raid_bdev1", 00:22:45.803 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:45.803 "strip_size_kb": 0, 00:22:45.803 "state": "online", 00:22:45.803 "raid_level": "raid1", 00:22:45.803 "superblock": true, 00:22:45.803 "num_base_bdevs": 2, 00:22:45.803 "num_base_bdevs_discovered": 1, 00:22:45.803 "num_base_bdevs_operational": 1, 00:22:45.803 "base_bdevs_list": [ 00:22:45.803 { 00:22:45.803 "name": null, 00:22:45.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.803 "is_configured": false, 00:22:45.803 "data_offset": 0, 00:22:45.803 "data_size": 7936 00:22:45.803 }, 00:22:45.803 { 00:22:45.803 "name": "BaseBdev2", 00:22:45.803 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:45.803 "is_configured": true, 00:22:45.803 "data_offset": 256, 00:22:45.803 "data_size": 7936 00:22:45.803 } 00:22:45.803 ] 00:22:45.803 }' 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:45.803 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:45.804 [2024-10-30 10:51:07.263521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:45.804 [2024-10-30 10:51:07.263739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.804 [2024-10-30 10:51:07.263786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:45.804 [2024-10-30 10:51:07.263813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.804 [2024-10-30 10:51:07.264388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.804 [2024-10-30 10:51:07.264424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:45.804 [2024-10-30 10:51:07.264574] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:45.804 [2024-10-30 10:51:07.264596] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:45.804 [2024-10-30 10:51:07.264610] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:45.804 [2024-10-30 10:51:07.264622] 
bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:45.804 BaseBdev1 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.804 10:51:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.181 "name": "raid_bdev1", 00:22:47.181 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:47.181 "strip_size_kb": 0, 00:22:47.181 "state": "online", 00:22:47.181 "raid_level": "raid1", 00:22:47.181 "superblock": true, 00:22:47.181 "num_base_bdevs": 2, 00:22:47.181 "num_base_bdevs_discovered": 1, 00:22:47.181 "num_base_bdevs_operational": 1, 00:22:47.181 "base_bdevs_list": [ 00:22:47.181 { 00:22:47.181 "name": null, 00:22:47.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.181 "is_configured": false, 00:22:47.181 "data_offset": 0, 00:22:47.181 "data_size": 7936 00:22:47.181 }, 00:22:47.181 { 00:22:47.181 "name": "BaseBdev2", 00:22:47.181 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:47.181 "is_configured": true, 00:22:47.181 "data_offset": 256, 00:22:47.181 "data_size": 7936 00:22:47.181 } 00:22:47.181 ] 00:22:47.181 }' 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.181 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:47.440 "name": "raid_bdev1", 00:22:47.440 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:47.440 "strip_size_kb": 0, 00:22:47.440 "state": "online", 00:22:47.440 "raid_level": "raid1", 00:22:47.440 "superblock": true, 00:22:47.440 "num_base_bdevs": 2, 00:22:47.440 "num_base_bdevs_discovered": 1, 00:22:47.440 "num_base_bdevs_operational": 1, 00:22:47.440 "base_bdevs_list": [ 00:22:47.440 { 00:22:47.440 "name": null, 00:22:47.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.440 "is_configured": false, 00:22:47.440 "data_offset": 0, 00:22:47.440 "data_size": 7936 00:22:47.440 }, 00:22:47.440 { 00:22:47.440 "name": "BaseBdev2", 00:22:47.440 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:47.440 "is_configured": true, 00:22:47.440 "data_offset": 256, 00:22:47.440 "data_size": 7936 00:22:47.440 } 00:22:47.440 ] 00:22:47.440 }' 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:47.440 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:47.699 [2024-10-30 10:51:08.972325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.699 [2024-10-30 10:51:08.972602] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:47.699 [2024-10-30 10:51:08.972625] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:47.699 request: 00:22:47.699 { 00:22:47.699 "base_bdev": "BaseBdev1", 00:22:47.699 "raid_bdev": "raid_bdev1", 00:22:47.699 "method": "bdev_raid_add_base_bdev", 00:22:47.699 "req_id": 1 00:22:47.699 } 00:22:47.699 Got JSON-RPC error response 00:22:47.699 response: 00:22:47.699 { 00:22:47.699 "code": -22, 00:22:47.699 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:47.699 } 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:47.699 10:51:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:48.635 10:51:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:48.635 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.635 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.635 "name": "raid_bdev1", 00:22:48.635 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:48.635 "strip_size_kb": 0, 00:22:48.635 "state": "online", 00:22:48.635 "raid_level": "raid1", 00:22:48.635 "superblock": true, 00:22:48.635 "num_base_bdevs": 2, 00:22:48.635 "num_base_bdevs_discovered": 1, 00:22:48.635 "num_base_bdevs_operational": 1, 00:22:48.635 "base_bdevs_list": [ 00:22:48.635 { 00:22:48.635 "name": null, 00:22:48.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.635 "is_configured": false, 00:22:48.635 "data_offset": 0, 00:22:48.635 "data_size": 7936 00:22:48.635 }, 00:22:48.635 { 00:22:48.635 "name": "BaseBdev2", 00:22:48.635 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:48.635 "is_configured": true, 00:22:48.635 "data_offset": 256, 00:22:48.635 "data_size": 7936 00:22:48.635 } 00:22:48.635 ] 00:22:48.635 }' 00:22:48.635 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.635 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:49.246 10:51:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:49.246 "name": "raid_bdev1", 00:22:49.246 "uuid": "eddf2f17-0012-4a1a-8213-b3f827517153", 00:22:49.246 "strip_size_kb": 0, 00:22:49.246 "state": "online", 00:22:49.246 "raid_level": "raid1", 00:22:49.246 "superblock": true, 00:22:49.246 "num_base_bdevs": 2, 00:22:49.246 "num_base_bdevs_discovered": 1, 00:22:49.246 "num_base_bdevs_operational": 1, 00:22:49.246 "base_bdevs_list": [ 00:22:49.246 { 00:22:49.246 "name": null, 00:22:49.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.246 "is_configured": false, 00:22:49.246 "data_offset": 0, 00:22:49.246 "data_size": 7936 00:22:49.246 }, 00:22:49.246 { 00:22:49.246 "name": "BaseBdev2", 00:22:49.246 "uuid": "872cda92-5ba6-5f8c-8637-d83752f326df", 00:22:49.246 "is_configured": true, 00:22:49.246 "data_offset": 256, 00:22:49.246 "data_size": 7936 00:22:49.246 } 00:22:49.246 ] 00:22:49.246 }' 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:49.246 10:51:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87082 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 87082 ']' 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 87082 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87082 00:22:49.246 killing process with pid 87082 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87082' 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 87082 00:22:49.246 Received shutdown signal, test time was about 60.000000 seconds 00:22:49.246 00:22:49.246 Latency(us) 00:22:49.246 [2024-10-30T10:51:10.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.246 [2024-10-30T10:51:10.716Z] =================================================================================================================== 00:22:49.246 [2024-10-30T10:51:10.716Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.246 10:51:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 87082 00:22:49.246 [2024-10-30 10:51:10.700915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:49.246 [2024-10-30 10:51:10.701117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.246 [2024-10-30 
10:51:10.701187] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.246 [2024-10-30 10:51:10.701214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:49.812 [2024-10-30 10:51:10.980614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:50.750 10:51:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:22:50.750 00:22:50.750 real 0m22.047s 00:22:50.750 user 0m30.018s 00:22:50.750 sys 0m2.574s 00:22:50.750 ************************************ 00:22:50.750 END TEST raid_rebuild_test_sb_4k 00:22:50.750 ************************************ 00:22:50.750 10:51:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:50.750 10:51:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:50.750 10:51:12 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:22:50.750 10:51:12 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:50.750 10:51:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:50.750 10:51:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:50.750 10:51:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:50.750 ************************************ 00:22:50.750 START TEST raid_state_function_test_sb_md_separate 00:22:50.750 ************************************ 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:50.750 
10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:50.750 10:51:12 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:50.750 Process raid pid: 87791 00:22:50.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87791 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87791' 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87791 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87791 ']' 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:50.750 10:51:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.010 [2024-10-30 10:51:12.260565] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:22:51.010 [2024-10-30 10:51:12.261034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.010 [2024-10-30 10:51:12.445905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.270 [2024-10-30 10:51:12.577970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.529 [2024-10-30 10:51:12.795845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.529 [2024-10-30 10:51:12.796068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.789 [2024-10-30 10:51:13.215783] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:51.789 [2024-10-30 10:51:13.215850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:22:51.789 [2024-10-30 10:51:13.215868] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:51.789 [2024-10-30 10:51:13.215884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:51.789 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.049 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.049 "name": "Existed_Raid", 00:22:52.049 "uuid": "657584ea-0e8d-41f9-9b5b-928c3b6d8df5", 00:22:52.049 "strip_size_kb": 0, 00:22:52.049 "state": "configuring", 00:22:52.049 "raid_level": "raid1", 00:22:52.049 "superblock": true, 00:22:52.049 "num_base_bdevs": 2, 00:22:52.049 "num_base_bdevs_discovered": 0, 00:22:52.049 "num_base_bdevs_operational": 2, 00:22:52.049 "base_bdevs_list": [ 00:22:52.049 { 00:22:52.049 "name": "BaseBdev1", 00:22:52.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.049 "is_configured": false, 00:22:52.049 "data_offset": 0, 00:22:52.049 "data_size": 0 00:22:52.049 }, 00:22:52.049 { 00:22:52.049 "name": "BaseBdev2", 00:22:52.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.049 "is_configured": false, 00:22:52.049 "data_offset": 0, 00:22:52.049 "data_size": 0 00:22:52.049 } 00:22:52.049 ] 00:22:52.049 }' 00:22:52.049 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.049 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.308 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:52.308 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.308 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.308 
[2024-10-30 10:51:13.735933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:52.309 [2024-10-30 10:51:13.735975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:52.309 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.309 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:52.309 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.309 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.309 [2024-10-30 10:51:13.743847] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:52.309 [2024-10-30 10:51:13.743919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:52.309 [2024-10-30 10:51:13.743949] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:52.309 [2024-10-30 10:51:13.743980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:52.309 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.309 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:52.309 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.309 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.568 [2024-10-30 10:51:13.792824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:52.568 
BaseBdev1 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.568 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.568 [ 00:22:52.568 { 00:22:52.568 "name": "BaseBdev1", 00:22:52.568 "aliases": [ 00:22:52.568 "913115bb-f878-40da-a047-e964d6b9b195" 00:22:52.568 ], 00:22:52.568 "product_name": "Malloc disk", 
00:22:52.568 "block_size": 4096, 00:22:52.568 "num_blocks": 8192, 00:22:52.568 "uuid": "913115bb-f878-40da-a047-e964d6b9b195", 00:22:52.568 "md_size": 32, 00:22:52.568 "md_interleave": false, 00:22:52.568 "dif_type": 0, 00:22:52.568 "assigned_rate_limits": { 00:22:52.568 "rw_ios_per_sec": 0, 00:22:52.568 "rw_mbytes_per_sec": 0, 00:22:52.568 "r_mbytes_per_sec": 0, 00:22:52.568 "w_mbytes_per_sec": 0 00:22:52.568 }, 00:22:52.568 "claimed": true, 00:22:52.568 "claim_type": "exclusive_write", 00:22:52.568 "zoned": false, 00:22:52.568 "supported_io_types": { 00:22:52.568 "read": true, 00:22:52.568 "write": true, 00:22:52.568 "unmap": true, 00:22:52.568 "flush": true, 00:22:52.568 "reset": true, 00:22:52.568 "nvme_admin": false, 00:22:52.568 "nvme_io": false, 00:22:52.568 "nvme_io_md": false, 00:22:52.568 "write_zeroes": true, 00:22:52.568 "zcopy": true, 00:22:52.568 "get_zone_info": false, 00:22:52.568 "zone_management": false, 00:22:52.568 "zone_append": false, 00:22:52.568 "compare": false, 00:22:52.568 "compare_and_write": false, 00:22:52.568 "abort": true, 00:22:52.568 "seek_hole": false, 00:22:52.568 "seek_data": false, 00:22:52.568 "copy": true, 00:22:52.569 "nvme_iov_md": false 00:22:52.569 }, 00:22:52.569 "memory_domains": [ 00:22:52.569 { 00:22:52.569 "dma_device_id": "system", 00:22:52.569 "dma_device_type": 1 00:22:52.569 }, 00:22:52.569 { 00:22:52.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.569 "dma_device_type": 2 00:22:52.569 } 00:22:52.569 ], 00:22:52.569 "driver_specific": {} 00:22:52.569 } 00:22:52.569 ] 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:52.569 10:51:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.569 "name": "Existed_Raid", 00:22:52.569 "uuid": "61bd7ccf-f0ba-403b-bfce-dacc99532651", 
00:22:52.569 "strip_size_kb": 0, 00:22:52.569 "state": "configuring", 00:22:52.569 "raid_level": "raid1", 00:22:52.569 "superblock": true, 00:22:52.569 "num_base_bdevs": 2, 00:22:52.569 "num_base_bdevs_discovered": 1, 00:22:52.569 "num_base_bdevs_operational": 2, 00:22:52.569 "base_bdevs_list": [ 00:22:52.569 { 00:22:52.569 "name": "BaseBdev1", 00:22:52.569 "uuid": "913115bb-f878-40da-a047-e964d6b9b195", 00:22:52.569 "is_configured": true, 00:22:52.569 "data_offset": 256, 00:22:52.569 "data_size": 7936 00:22:52.569 }, 00:22:52.569 { 00:22:52.569 "name": "BaseBdev2", 00:22:52.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.569 "is_configured": false, 00:22:52.569 "data_offset": 0, 00:22:52.569 "data_size": 0 00:22:52.569 } 00:22:52.569 ] 00:22:52.569 }' 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.569 10:51:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.136 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:53.136 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.136 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.136 [2024-10-30 10:51:14.385159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:53.136 [2024-10-30 10:51:14.385221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:53.137 10:51:14 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.137 [2024-10-30 10:51:14.397194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:53.137 [2024-10-30 10:51:14.400102] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:53.137 [2024-10-30 10:51:14.400280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.137 "name": "Existed_Raid", 00:22:53.137 "uuid": "c1bb1893-edb4-441e-bca4-ad564eb8e82f", 00:22:53.137 "strip_size_kb": 0, 00:22:53.137 "state": "configuring", 00:22:53.137 "raid_level": "raid1", 00:22:53.137 "superblock": true, 00:22:53.137 "num_base_bdevs": 2, 00:22:53.137 "num_base_bdevs_discovered": 1, 00:22:53.137 "num_base_bdevs_operational": 2, 00:22:53.137 "base_bdevs_list": [ 00:22:53.137 { 00:22:53.137 "name": "BaseBdev1", 00:22:53.137 "uuid": "913115bb-f878-40da-a047-e964d6b9b195", 00:22:53.137 "is_configured": true, 00:22:53.137 "data_offset": 256, 00:22:53.137 "data_size": 7936 00:22:53.137 }, 00:22:53.137 { 00:22:53.137 "name": "BaseBdev2", 00:22:53.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.137 "is_configured": false, 00:22:53.137 "data_offset": 0, 00:22:53.137 "data_size": 0 00:22:53.137 } 00:22:53.137 ] 00:22:53.137 }' 00:22:53.137 10:51:14 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.137 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.706 [2024-10-30 10:51:14.962216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.706 [2024-10-30 10:51:14.962790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:53.706 [2024-10-30 10:51:14.962817] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:53.706 [2024-10-30 10:51:14.962918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:53.706 BaseBdev2 00:22:53.706 [2024-10-30 10:51:14.963136] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:53.706 [2024-10-30 10:51:14.963155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:53.706 [2024-10-30 10:51:14.963287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.706 [ 00:22:53.706 { 00:22:53.706 "name": "BaseBdev2", 00:22:53.706 "aliases": [ 00:22:53.706 "65f2e6ce-9873-4467-9edd-5ebc3b00a505" 00:22:53.706 ], 00:22:53.706 "product_name": "Malloc disk", 00:22:53.706 "block_size": 4096, 00:22:53.706 "num_blocks": 8192, 00:22:53.706 "uuid": "65f2e6ce-9873-4467-9edd-5ebc3b00a505", 00:22:53.706 "md_size": 32, 00:22:53.706 "md_interleave": false, 00:22:53.706 "dif_type": 0, 00:22:53.706 "assigned_rate_limits": { 00:22:53.706 "rw_ios_per_sec": 0, 00:22:53.706 "rw_mbytes_per_sec": 0, 00:22:53.706 "r_mbytes_per_sec": 0, 00:22:53.706 "w_mbytes_per_sec": 0 00:22:53.706 }, 00:22:53.706 "claimed": true, 00:22:53.706 "claim_type": 
"exclusive_write", 00:22:53.706 "zoned": false, 00:22:53.706 "supported_io_types": { 00:22:53.706 "read": true, 00:22:53.706 "write": true, 00:22:53.706 "unmap": true, 00:22:53.706 "flush": true, 00:22:53.706 "reset": true, 00:22:53.706 "nvme_admin": false, 00:22:53.706 "nvme_io": false, 00:22:53.706 "nvme_io_md": false, 00:22:53.706 "write_zeroes": true, 00:22:53.706 "zcopy": true, 00:22:53.706 "get_zone_info": false, 00:22:53.706 "zone_management": false, 00:22:53.706 "zone_append": false, 00:22:53.706 "compare": false, 00:22:53.706 "compare_and_write": false, 00:22:53.706 "abort": true, 00:22:53.706 "seek_hole": false, 00:22:53.706 "seek_data": false, 00:22:53.706 "copy": true, 00:22:53.706 "nvme_iov_md": false 00:22:53.706 }, 00:22:53.706 "memory_domains": [ 00:22:53.706 { 00:22:53.706 "dma_device_id": "system", 00:22:53.706 "dma_device_type": 1 00:22:53.706 }, 00:22:53.706 { 00:22:53.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.706 "dma_device_type": 2 00:22:53.706 } 00:22:53.706 ], 00:22:53.706 "driver_specific": {} 00:22:53.706 } 00:22:53.706 ] 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:53.706 
10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:53.706 10:51:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:53.706 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.706 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.706 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.706 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:53.706 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.706 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:53.706 "name": "Existed_Raid", 00:22:53.706 "uuid": "c1bb1893-edb4-441e-bca4-ad564eb8e82f", 00:22:53.706 "strip_size_kb": 0, 00:22:53.706 "state": "online", 00:22:53.706 "raid_level": "raid1", 00:22:53.706 "superblock": true, 00:22:53.706 "num_base_bdevs": 2, 00:22:53.706 "num_base_bdevs_discovered": 2, 00:22:53.706 "num_base_bdevs_operational": 2, 00:22:53.706 
"base_bdevs_list": [ 00:22:53.706 { 00:22:53.706 "name": "BaseBdev1", 00:22:53.706 "uuid": "913115bb-f878-40da-a047-e964d6b9b195", 00:22:53.706 "is_configured": true, 00:22:53.706 "data_offset": 256, 00:22:53.706 "data_size": 7936 00:22:53.706 }, 00:22:53.706 { 00:22:53.706 "name": "BaseBdev2", 00:22:53.706 "uuid": "65f2e6ce-9873-4467-9edd-5ebc3b00a505", 00:22:53.706 "is_configured": true, 00:22:53.706 "data_offset": 256, 00:22:53.706 "data_size": 7936 00:22:53.706 } 00:22:53.706 ] 00:22:53.706 }' 00:22:53.706 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:53.706 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:22:54.277 [2024-10-30 10:51:15.558844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:54.277 "name": "Existed_Raid", 00:22:54.277 "aliases": [ 00:22:54.277 "c1bb1893-edb4-441e-bca4-ad564eb8e82f" 00:22:54.277 ], 00:22:54.277 "product_name": "Raid Volume", 00:22:54.277 "block_size": 4096, 00:22:54.277 "num_blocks": 7936, 00:22:54.277 "uuid": "c1bb1893-edb4-441e-bca4-ad564eb8e82f", 00:22:54.277 "md_size": 32, 00:22:54.277 "md_interleave": false, 00:22:54.277 "dif_type": 0, 00:22:54.277 "assigned_rate_limits": { 00:22:54.277 "rw_ios_per_sec": 0, 00:22:54.277 "rw_mbytes_per_sec": 0, 00:22:54.277 "r_mbytes_per_sec": 0, 00:22:54.277 "w_mbytes_per_sec": 0 00:22:54.277 }, 00:22:54.277 "claimed": false, 00:22:54.277 "zoned": false, 00:22:54.277 "supported_io_types": { 00:22:54.277 "read": true, 00:22:54.277 "write": true, 00:22:54.277 "unmap": false, 00:22:54.277 "flush": false, 00:22:54.277 "reset": true, 00:22:54.277 "nvme_admin": false, 00:22:54.277 "nvme_io": false, 00:22:54.277 "nvme_io_md": false, 00:22:54.277 "write_zeroes": true, 00:22:54.277 "zcopy": false, 00:22:54.277 "get_zone_info": false, 00:22:54.277 "zone_management": false, 00:22:54.277 "zone_append": false, 00:22:54.277 "compare": false, 00:22:54.277 "compare_and_write": false, 00:22:54.277 "abort": false, 00:22:54.277 "seek_hole": false, 00:22:54.277 "seek_data": false, 00:22:54.277 "copy": false, 00:22:54.277 "nvme_iov_md": false 00:22:54.277 }, 00:22:54.277 "memory_domains": [ 00:22:54.277 { 00:22:54.277 "dma_device_id": "system", 00:22:54.277 "dma_device_type": 1 00:22:54.277 }, 00:22:54.277 { 00:22:54.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.277 "dma_device_type": 2 00:22:54.277 }, 00:22:54.277 { 
00:22:54.277 "dma_device_id": "system", 00:22:54.277 "dma_device_type": 1 00:22:54.277 }, 00:22:54.277 { 00:22:54.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.277 "dma_device_type": 2 00:22:54.277 } 00:22:54.277 ], 00:22:54.277 "driver_specific": { 00:22:54.277 "raid": { 00:22:54.277 "uuid": "c1bb1893-edb4-441e-bca4-ad564eb8e82f", 00:22:54.277 "strip_size_kb": 0, 00:22:54.277 "state": "online", 00:22:54.277 "raid_level": "raid1", 00:22:54.277 "superblock": true, 00:22:54.277 "num_base_bdevs": 2, 00:22:54.277 "num_base_bdevs_discovered": 2, 00:22:54.277 "num_base_bdevs_operational": 2, 00:22:54.277 "base_bdevs_list": [ 00:22:54.277 { 00:22:54.277 "name": "BaseBdev1", 00:22:54.277 "uuid": "913115bb-f878-40da-a047-e964d6b9b195", 00:22:54.277 "is_configured": true, 00:22:54.277 "data_offset": 256, 00:22:54.277 "data_size": 7936 00:22:54.277 }, 00:22:54.277 { 00:22:54.277 "name": "BaseBdev2", 00:22:54.277 "uuid": "65f2e6ce-9873-4467-9edd-5ebc3b00a505", 00:22:54.277 "is_configured": true, 00:22:54.277 "data_offset": 256, 00:22:54.277 "data_size": 7936 00:22:54.277 } 00:22:54.277 ] 00:22:54.277 } 00:22:54.277 } 00:22:54.277 }' 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:54.277 BaseBdev2' 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.277 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.537 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:54.537 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.538 [2024-10-30 10:51:15.818598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.538 "name": "Existed_Raid", 00:22:54.538 "uuid": "c1bb1893-edb4-441e-bca4-ad564eb8e82f", 00:22:54.538 "strip_size_kb": 0, 00:22:54.538 "state": "online", 00:22:54.538 "raid_level": "raid1", 00:22:54.538 "superblock": true, 00:22:54.538 "num_base_bdevs": 2, 00:22:54.538 "num_base_bdevs_discovered": 1, 00:22:54.538 "num_base_bdevs_operational": 1, 00:22:54.538 "base_bdevs_list": [ 00:22:54.538 { 00:22:54.538 "name": null, 00:22:54.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.538 "is_configured": false, 00:22:54.538 "data_offset": 0, 00:22:54.538 "data_size": 7936 00:22:54.538 }, 00:22:54.538 { 00:22:54.538 "name": "BaseBdev2", 00:22:54.538 "uuid": 
"65f2e6ce-9873-4467-9edd-5ebc3b00a505", 00:22:54.538 "is_configured": true, 00:22:54.538 "data_offset": 256, 00:22:54.538 "data_size": 7936 00:22:54.538 } 00:22:54.538 ] 00:22:54.538 }' 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.538 10:51:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.107 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.107 [2024-10-30 10:51:16.518818] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:55.107 [2024-10-30 10:51:16.519005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.366 [2024-10-30 10:51:16.622060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.366 [2024-10-30 10:51:16.622137] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.366 [2024-10-30 10:51:16.622159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:55.366 10:51:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87791 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87791 ']' 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87791 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87791 00:22:55.366 killing process with pid 87791 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87791' 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87791 00:22:55.366 10:51:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87791 00:22:55.366 [2024-10-30 10:51:16.712322] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:55.366 [2024-10-30 10:51:16.728168] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:56.745 10:51:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:22:56.745 00:22:56.745 real 0m5.737s 00:22:56.745 user 0m8.628s 00:22:56.745 sys 0m0.811s 00:22:56.745 10:51:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:56.745 
10:51:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:56.745 ************************************ 00:22:56.745 END TEST raid_state_function_test_sb_md_separate 00:22:56.745 ************************************ 00:22:56.745 10:51:17 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:56.745 10:51:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:22:56.745 10:51:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:56.745 10:51:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:56.745 ************************************ 00:22:56.745 START TEST raid_superblock_test_md_separate 00:22:56.745 ************************************ 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88049 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88049 00:22:56.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88049 ']' 00:22:56.745 10:51:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.746 10:51:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:56.746 10:51:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:56.746 10:51:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:56.746 10:51:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:56.746 [2024-10-30 10:51:18.034512] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:22:56.746 [2024-10-30 10:51:18.034687] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88049 ] 00:22:56.746 [2024-10-30 10:51:18.212682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.005 [2024-10-30 10:51:18.343725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.264 [2024-10-30 10:51:18.552578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:57.264 [2024-10-30 10:51:18.552623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:57.832 10:51:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.832 malloc1 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.832 [2024-10-30 10:51:19.103615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:57.832 [2024-10-30 10:51:19.103692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.832 [2024-10-30 10:51:19.103724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:57.832 [2024-10-30 10:51:19.103739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.832 [2024-10-30 10:51:19.106379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.832 [2024-10-30 10:51:19.106424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:22:57.832 pt1 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.832 malloc2 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.832 10:51:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.832 [2024-10-30 10:51:19.161075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:57.832 [2024-10-30 10:51:19.161312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.832 [2024-10-30 10:51:19.161392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:57.832 [2024-10-30 10:51:19.161599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.832 [2024-10-30 10:51:19.164245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.832 [2024-10-30 10:51:19.164466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:57.832 pt2 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.832 [2024-10-30 10:51:19.173214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:57.832 [2024-10-30 10:51:19.175937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:57.832 [2024-10-30 10:51:19.176370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:57.832 [2024-10-30 10:51:19.176399] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:57.832 [2024-10-30 10:51:19.176503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:57.832 [2024-10-30 10:51:19.176671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:57.832 [2024-10-30 10:51:19.176692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:57.832 [2024-10-30 10:51:19.176824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.832 10:51:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.832 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:57.833 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.833 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.833 "name": "raid_bdev1", 00:22:57.833 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:22:57.833 "strip_size_kb": 0, 00:22:57.833 "state": "online", 00:22:57.833 "raid_level": "raid1", 00:22:57.833 "superblock": true, 00:22:57.833 "num_base_bdevs": 2, 00:22:57.833 "num_base_bdevs_discovered": 2, 00:22:57.833 "num_base_bdevs_operational": 2, 00:22:57.833 "base_bdevs_list": [ 00:22:57.833 { 00:22:57.833 "name": "pt1", 00:22:57.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:57.833 "is_configured": true, 00:22:57.833 "data_offset": 256, 00:22:57.833 "data_size": 7936 00:22:57.833 }, 00:22:57.833 { 00:22:57.833 "name": "pt2", 00:22:57.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:57.833 "is_configured": true, 00:22:57.833 "data_offset": 256, 00:22:57.833 "data_size": 7936 00:22:57.833 } 00:22:57.833 ] 00:22:57.833 }' 00:22:57.833 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.833 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:58.401 [2024-10-30 10:51:19.725737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:58.401 "name": "raid_bdev1", 00:22:58.401 "aliases": [ 00:22:58.401 "31313212-7b58-4123-ac9c-e7073cb9cb9d" 00:22:58.401 ], 00:22:58.401 "product_name": "Raid Volume", 00:22:58.401 "block_size": 4096, 00:22:58.401 "num_blocks": 7936, 00:22:58.401 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:22:58.401 "md_size": 32, 00:22:58.401 "md_interleave": false, 00:22:58.401 "dif_type": 0, 00:22:58.401 "assigned_rate_limits": { 00:22:58.401 "rw_ios_per_sec": 0, 00:22:58.401 "rw_mbytes_per_sec": 0, 00:22:58.401 "r_mbytes_per_sec": 0, 00:22:58.401 "w_mbytes_per_sec": 0 00:22:58.401 }, 00:22:58.401 "claimed": false, 00:22:58.401 "zoned": false, 
00:22:58.401 "supported_io_types": { 00:22:58.401 "read": true, 00:22:58.401 "write": true, 00:22:58.401 "unmap": false, 00:22:58.401 "flush": false, 00:22:58.401 "reset": true, 00:22:58.401 "nvme_admin": false, 00:22:58.401 "nvme_io": false, 00:22:58.401 "nvme_io_md": false, 00:22:58.401 "write_zeroes": true, 00:22:58.401 "zcopy": false, 00:22:58.401 "get_zone_info": false, 00:22:58.401 "zone_management": false, 00:22:58.401 "zone_append": false, 00:22:58.401 "compare": false, 00:22:58.401 "compare_and_write": false, 00:22:58.401 "abort": false, 00:22:58.401 "seek_hole": false, 00:22:58.401 "seek_data": false, 00:22:58.401 "copy": false, 00:22:58.401 "nvme_iov_md": false 00:22:58.401 }, 00:22:58.401 "memory_domains": [ 00:22:58.401 { 00:22:58.401 "dma_device_id": "system", 00:22:58.401 "dma_device_type": 1 00:22:58.401 }, 00:22:58.401 { 00:22:58.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.401 "dma_device_type": 2 00:22:58.401 }, 00:22:58.401 { 00:22:58.401 "dma_device_id": "system", 00:22:58.401 "dma_device_type": 1 00:22:58.401 }, 00:22:58.401 { 00:22:58.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.401 "dma_device_type": 2 00:22:58.401 } 00:22:58.401 ], 00:22:58.401 "driver_specific": { 00:22:58.401 "raid": { 00:22:58.401 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:22:58.401 "strip_size_kb": 0, 00:22:58.401 "state": "online", 00:22:58.401 "raid_level": "raid1", 00:22:58.401 "superblock": true, 00:22:58.401 "num_base_bdevs": 2, 00:22:58.401 "num_base_bdevs_discovered": 2, 00:22:58.401 "num_base_bdevs_operational": 2, 00:22:58.401 "base_bdevs_list": [ 00:22:58.401 { 00:22:58.401 "name": "pt1", 00:22:58.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:58.401 "is_configured": true, 00:22:58.401 "data_offset": 256, 00:22:58.401 "data_size": 7936 00:22:58.401 }, 00:22:58.401 { 00:22:58.401 "name": "pt2", 00:22:58.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.401 "is_configured": true, 00:22:58.401 "data_offset": 256, 
00:22:58.401 "data_size": 7936 00:22:58.401 } 00:22:58.401 ] 00:22:58.401 } 00:22:58.401 } 00:22:58.401 }' 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:58.401 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:58.401 pt2' 00:22:58.402 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.661 10:51:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:58.661 [2024-10-30 10:51:19.985744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=31313212-7b58-4123-ac9c-e7073cb9cb9d 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 31313212-7b58-4123-ac9c-e7073cb9cb9d ']' 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:58.661 10:51:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.661 [2024-10-30 10:51:20.045426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:58.661 [2024-10-30 10:51:20.045475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:58.661 [2024-10-30 10:51:20.045592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:58.661 [2024-10-30 10:51:20.045669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:58.661 [2024-10-30 10:51:20.045689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.661 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.921 [2024-10-30 10:51:20.197481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:58.921 [2024-10-30 10:51:20.200346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:58.921 [2024-10-30 10:51:20.200589] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:58.921 [2024-10-30 10:51:20.200878] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:58.921 [2024-10-30 10:51:20.201085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:58.921 [2024-10-30 10:51:20.201239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:22:58.921 request: 00:22:58.921 { 00:22:58.921 "name": "raid_bdev1", 00:22:58.921 "raid_level": "raid1", 00:22:58.921 "base_bdevs": [ 00:22:58.921 "malloc1", 00:22:58.921 "malloc2" 00:22:58.921 ], 00:22:58.921 "superblock": false, 00:22:58.921 "method": "bdev_raid_create", 00:22:58.921 "req_id": 1 00:22:58.921 } 00:22:58.921 Got JSON-RPC error response 00:22:58.921 response: 00:22:58.921 { 00:22:58.921 "code": -17, 00:22:58.921 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:58.921 } 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.921 [2024-10-30 10:51:20.269653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:58.921 [2024-10-30 10:51:20.269865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.921 [2024-10-30 10:51:20.269935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:58.921 [2024-10-30 10:51:20.270155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.921 [2024-10-30 10:51:20.273052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.921 [2024-10-30 10:51:20.273103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:58.921 [2024-10-30 10:51:20.273168] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:58.921 [2024-10-30 10:51:20.273247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:58.921 pt1 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.921 "name": "raid_bdev1", 00:22:58.921 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:22:58.921 "strip_size_kb": 0, 00:22:58.921 "state": "configuring", 00:22:58.921 "raid_level": "raid1", 00:22:58.921 "superblock": true, 00:22:58.921 "num_base_bdevs": 2, 00:22:58.921 "num_base_bdevs_discovered": 1, 00:22:58.921 "num_base_bdevs_operational": 2, 00:22:58.921 "base_bdevs_list": [ 00:22:58.921 { 00:22:58.921 "name": "pt1", 00:22:58.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:58.921 "is_configured": true, 00:22:58.921 "data_offset": 256, 00:22:58.921 "data_size": 7936 00:22:58.921 }, 00:22:58.921 { 
00:22:58.921 "name": null, 00:22:58.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:58.921 "is_configured": false, 00:22:58.921 "data_offset": 256, 00:22:58.921 "data_size": 7936 00:22:58.921 } 00:22:58.921 ] 00:22:58.921 }' 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.921 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.491 [2024-10-30 10:51:20.833872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:59.491 [2024-10-30 10:51:20.833966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.491 [2024-10-30 10:51:20.834011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:59.491 [2024-10-30 10:51:20.834031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.491 [2024-10-30 10:51:20.834316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.491 [2024-10-30 10:51:20.834346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:59.491 [2024-10-30 10:51:20.834427] 
bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:59.491 [2024-10-30 10:51:20.834461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:59.491 [2024-10-30 10:51:20.834599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:59.491 [2024-10-30 10:51:20.834619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:59.491 [2024-10-30 10:51:20.834721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:59.491 [2024-10-30 10:51:20.834912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:59.491 [2024-10-30 10:51:20.834928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:59.491 [2024-10-30 10:51:20.835069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.491 pt2 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.491 10:51:20 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.491 "name": "raid_bdev1", 00:22:59.491 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:22:59.491 "strip_size_kb": 0, 00:22:59.491 "state": "online", 00:22:59.491 "raid_level": "raid1", 00:22:59.491 "superblock": true, 00:22:59.491 "num_base_bdevs": 2, 00:22:59.491 "num_base_bdevs_discovered": 2, 00:22:59.491 "num_base_bdevs_operational": 2, 00:22:59.491 "base_bdevs_list": [ 00:22:59.491 { 00:22:59.491 "name": "pt1", 00:22:59.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:59.491 "is_configured": true, 00:22:59.491 "data_offset": 256, 00:22:59.491 "data_size": 7936 00:22:59.491 }, 00:22:59.491 { 00:22:59.491 "name": "pt2", 00:22:59.491 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:22:59.491 "is_configured": true, 00:22:59.491 "data_offset": 256, 00:22:59.491 "data_size": 7936 00:22:59.491 } 00:22:59.491 ] 00:22:59.491 }' 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.491 10:51:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.058 [2024-10-30 10:51:21.386364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.058 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:00.058 "name": "raid_bdev1", 00:23:00.058 
"aliases": [ 00:23:00.058 "31313212-7b58-4123-ac9c-e7073cb9cb9d" 00:23:00.058 ], 00:23:00.059 "product_name": "Raid Volume", 00:23:00.059 "block_size": 4096, 00:23:00.059 "num_blocks": 7936, 00:23:00.059 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:23:00.059 "md_size": 32, 00:23:00.059 "md_interleave": false, 00:23:00.059 "dif_type": 0, 00:23:00.059 "assigned_rate_limits": { 00:23:00.059 "rw_ios_per_sec": 0, 00:23:00.059 "rw_mbytes_per_sec": 0, 00:23:00.059 "r_mbytes_per_sec": 0, 00:23:00.059 "w_mbytes_per_sec": 0 00:23:00.059 }, 00:23:00.059 "claimed": false, 00:23:00.059 "zoned": false, 00:23:00.059 "supported_io_types": { 00:23:00.059 "read": true, 00:23:00.059 "write": true, 00:23:00.059 "unmap": false, 00:23:00.059 "flush": false, 00:23:00.059 "reset": true, 00:23:00.059 "nvme_admin": false, 00:23:00.059 "nvme_io": false, 00:23:00.059 "nvme_io_md": false, 00:23:00.059 "write_zeroes": true, 00:23:00.059 "zcopy": false, 00:23:00.059 "get_zone_info": false, 00:23:00.059 "zone_management": false, 00:23:00.059 "zone_append": false, 00:23:00.059 "compare": false, 00:23:00.059 "compare_and_write": false, 00:23:00.059 "abort": false, 00:23:00.059 "seek_hole": false, 00:23:00.059 "seek_data": false, 00:23:00.059 "copy": false, 00:23:00.059 "nvme_iov_md": false 00:23:00.059 }, 00:23:00.059 "memory_domains": [ 00:23:00.059 { 00:23:00.059 "dma_device_id": "system", 00:23:00.059 "dma_device_type": 1 00:23:00.059 }, 00:23:00.059 { 00:23:00.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.059 "dma_device_type": 2 00:23:00.059 }, 00:23:00.059 { 00:23:00.059 "dma_device_id": "system", 00:23:00.059 "dma_device_type": 1 00:23:00.059 }, 00:23:00.059 { 00:23:00.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.059 "dma_device_type": 2 00:23:00.059 } 00:23:00.059 ], 00:23:00.059 "driver_specific": { 00:23:00.059 "raid": { 00:23:00.059 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:23:00.059 "strip_size_kb": 0, 00:23:00.059 "state": "online", 00:23:00.059 
"raid_level": "raid1", 00:23:00.059 "superblock": true, 00:23:00.059 "num_base_bdevs": 2, 00:23:00.059 "num_base_bdevs_discovered": 2, 00:23:00.059 "num_base_bdevs_operational": 2, 00:23:00.059 "base_bdevs_list": [ 00:23:00.059 { 00:23:00.059 "name": "pt1", 00:23:00.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:00.059 "is_configured": true, 00:23:00.059 "data_offset": 256, 00:23:00.059 "data_size": 7936 00:23:00.059 }, 00:23:00.059 { 00:23:00.059 "name": "pt2", 00:23:00.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.059 "is_configured": true, 00:23:00.059 "data_offset": 256, 00:23:00.059 "data_size": 7936 00:23:00.059 } 00:23:00.059 ] 00:23:00.059 } 00:23:00.059 } 00:23:00.059 }' 00:23:00.059 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:00.059 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:00.059 pt2' 00:23:00.059 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.318 10:51:21 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:00.318 [2024-10-30 10:51:21.646466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 31313212-7b58-4123-ac9c-e7073cb9cb9d '!=' 31313212-7b58-4123-ac9c-e7073cb9cb9d ']' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.318 [2024-10-30 10:51:21.698204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:00.318 
10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:00.318 "name": "raid_bdev1", 00:23:00.318 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:23:00.318 "strip_size_kb": 0, 00:23:00.318 "state": "online", 00:23:00.318 "raid_level": "raid1", 00:23:00.318 "superblock": true, 00:23:00.318 "num_base_bdevs": 2, 00:23:00.318 "num_base_bdevs_discovered": 1, 00:23:00.318 "num_base_bdevs_operational": 1, 00:23:00.318 "base_bdevs_list": [ 00:23:00.318 { 00:23:00.318 "name": null, 00:23:00.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.318 "is_configured": false, 00:23:00.318 "data_offset": 0, 00:23:00.318 "data_size": 7936 00:23:00.318 }, 00:23:00.318 { 00:23:00.318 "name": "pt2", 00:23:00.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.318 "is_configured": true, 00:23:00.318 "data_offset": 256, 00:23:00.318 "data_size": 7936 00:23:00.318 } 
00:23:00.318 ] 00:23:00.318 }' 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:00.318 10:51:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.887 [2024-10-30 10:51:22.246385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:00.887 [2024-10-30 10:51:22.246417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:00.887 [2024-10-30 10:51:22.246542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.887 [2024-10-30 10:51:22.246615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:00.887 [2024-10-30 10:51:22.246633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.887 10:51:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.887 [2024-10-30 10:51:22.318362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:00.887 [2024-10-30 
10:51:22.318461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.887 [2024-10-30 10:51:22.318485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:00.887 [2024-10-30 10:51:22.318500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.887 [2024-10-30 10:51:22.321206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.887 [2024-10-30 10:51:22.321253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:00.887 [2024-10-30 10:51:22.321315] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:00.887 [2024-10-30 10:51:22.321387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:00.887 [2024-10-30 10:51:22.321498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:00.887 [2024-10-30 10:51:22.321518] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:00.887 [2024-10-30 10:51:22.321600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:00.887 [2024-10-30 10:51:22.321773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:00.887 [2024-10-30 10:51:22.321789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:00.887 [2024-10-30 10:51:22.321898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.887 pt2 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:00.887 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.147 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.147 "name": "raid_bdev1", 00:23:01.147 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:23:01.147 "strip_size_kb": 0, 00:23:01.147 "state": "online", 00:23:01.147 "raid_level": "raid1", 00:23:01.147 "superblock": true, 00:23:01.147 "num_base_bdevs": 2, 00:23:01.147 
"num_base_bdevs_discovered": 1, 00:23:01.147 "num_base_bdevs_operational": 1, 00:23:01.147 "base_bdevs_list": [ 00:23:01.147 { 00:23:01.147 "name": null, 00:23:01.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.147 "is_configured": false, 00:23:01.147 "data_offset": 256, 00:23:01.147 "data_size": 7936 00:23:01.147 }, 00:23:01.147 { 00:23:01.147 "name": "pt2", 00:23:01.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:01.147 "is_configured": true, 00:23:01.147 "data_offset": 256, 00:23:01.147 "data_size": 7936 00:23:01.147 } 00:23:01.147 ] 00:23:01.147 }' 00:23:01.147 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.147 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:01.406 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:01.406 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.406 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:01.406 [2024-10-30 10:51:22.846521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:01.406 [2024-10-30 10:51:22.846721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:01.406 [2024-10-30 10:51:22.846830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.406 [2024-10-30 10:51:22.846898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.406 [2024-10-30 10:51:22.846913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:01.406 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.406 10:51:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.406 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.406 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:01.406 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:01.406 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:01.666 [2024-10-30 10:51:22.910609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:01.666 [2024-10-30 10:51:22.910843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.666 [2024-10-30 10:51:22.910888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:01.666 [2024-10-30 10:51:22.910905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.666 [2024-10-30 10:51:22.913898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.666 [2024-10-30 10:51:22.914154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:23:01.666 [2024-10-30 10:51:22.914249] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:01.666 [2024-10-30 10:51:22.914311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:01.666 [2024-10-30 10:51:22.914531] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:01.666 [2024-10-30 10:51:22.914565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:01.666 [2024-10-30 10:51:22.914590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:01.666 [2024-10-30 10:51:22.914667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:01.666 [2024-10-30 10:51:22.914834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:01.666 [2024-10-30 10:51:22.914850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:01.666 [2024-10-30 10:51:22.914975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:01.666 pt1 00:23:01.666 [2024-10-30 10:51:22.915175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:01.666 [2024-10-30 10:51:22.915197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:01.666 [2024-10-30 10:51:22.915346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.666 "name": "raid_bdev1", 00:23:01.666 "uuid": "31313212-7b58-4123-ac9c-e7073cb9cb9d", 00:23:01.666 "strip_size_kb": 0, 00:23:01.666 "state": "online", 00:23:01.666 "raid_level": "raid1", 
00:23:01.666 "superblock": true, 00:23:01.666 "num_base_bdevs": 2, 00:23:01.666 "num_base_bdevs_discovered": 1, 00:23:01.666 "num_base_bdevs_operational": 1, 00:23:01.666 "base_bdevs_list": [ 00:23:01.666 { 00:23:01.666 "name": null, 00:23:01.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.666 "is_configured": false, 00:23:01.666 "data_offset": 256, 00:23:01.666 "data_size": 7936 00:23:01.666 }, 00:23:01.666 { 00:23:01.666 "name": "pt2", 00:23:01.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:01.666 "is_configured": true, 00:23:01.666 "data_offset": 256, 00:23:01.666 "data_size": 7936 00:23:01.666 } 00:23:01.666 ] 00:23:01.666 }' 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.666 10:51:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 10:51:23 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:02.234 [2024-10-30 10:51:23.503219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 31313212-7b58-4123-ac9c-e7073cb9cb9d '!=' 31313212-7b58-4123-ac9c-e7073cb9cb9d ']' 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88049 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88049 ']' 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 88049 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88049 00:23:02.234 killing process with pid 88049 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88049' 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 88049 00:23:02.234 [2024-10-30 10:51:23.583809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:02.234 10:51:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # 
wait 88049 00:23:02.234 [2024-10-30 10:51:23.583948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.234 [2024-10-30 10:51:23.584029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:02.235 [2024-10-30 10:51:23.584054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:02.494 [2024-10-30 10:51:23.787368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:03.431 10:51:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:23:03.431 00:23:03.431 real 0m6.861s 00:23:03.431 user 0m10.932s 00:23:03.431 sys 0m0.980s 00:23:03.431 10:51:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:03.431 10:51:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:03.431 ************************************ 00:23:03.431 END TEST raid_superblock_test_md_separate 00:23:03.431 ************************************ 00:23:03.431 10:51:24 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:23:03.431 10:51:24 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:23:03.431 10:51:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:03.431 10:51:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:03.431 10:51:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:03.431 ************************************ 00:23:03.431 START TEST raid_rebuild_test_sb_md_separate 00:23:03.431 ************************************ 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid1 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:03.431 
10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88377 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88377 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88377 ']' 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:03.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:03.431 10:51:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:03.688 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:03.688 Zero copy mechanism will not be used. 00:23:03.688 [2024-10-30 10:51:24.962722] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:23:03.688 [2024-10-30 10:51:24.962850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88377 ] 00:23:03.688 [2024-10-30 10:51:25.140919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.946 [2024-10-30 10:51:25.278211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.205 [2024-10-30 10:51:25.484271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.205 [2024-10-30 10:51:25.484342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:04.794 10:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:04.794 10:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:23:04.794 10:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:04.794 10:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:23:04.794 10:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.794 10:51:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.794 BaseBdev1_malloc 
00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.794 [2024-10-30 10:51:26.011909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:04.794 [2024-10-30 10:51:26.012049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.794 [2024-10-30 10:51:26.012082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:04.794 [2024-10-30 10:51:26.012100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.794 [2024-10-30 10:51:26.014793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.794 [2024-10-30 10:51:26.014854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:04.794 BaseBdev1 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.794 BaseBdev2_malloc 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.794 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.794 [2024-10-30 10:51:26.072241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:04.794 [2024-10-30 10:51:26.072548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.795 [2024-10-30 10:51:26.072587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:04.795 [2024-10-30 10:51:26.072609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.795 [2024-10-30 10:51:26.075410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.795 [2024-10-30 10:51:26.075462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:04.795 BaseBdev2 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.795 spare_malloc 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.795 spare_delay 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.795 [2024-10-30 10:51:26.147928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:04.795 [2024-10-30 10:51:26.148063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.795 [2024-10-30 10:51:26.148096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:04.795 [2024-10-30 10:51:26.148114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.795 [2024-10-30 10:51:26.150816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.795 [2024-10-30 10:51:26.150895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:04.795 spare 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.795 [2024-10-30 10:51:26.159959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:04.795 [2024-10-30 10:51:26.162699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:04.795 [2024-10-30 10:51:26.163022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:04.795 [2024-10-30 10:51:26.163065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:04.795 [2024-10-30 10:51:26.163173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:04.795 [2024-10-30 10:51:26.163347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:04.795 [2024-10-30 10:51:26.163363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:04.795 [2024-10-30 10:51:26.163537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:04.795 10:51:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.795 "name": "raid_bdev1", 00:23:04.795 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:04.795 "strip_size_kb": 0, 00:23:04.795 "state": "online", 00:23:04.795 "raid_level": "raid1", 00:23:04.795 "superblock": true, 00:23:04.795 "num_base_bdevs": 2, 00:23:04.795 "num_base_bdevs_discovered": 2, 00:23:04.795 "num_base_bdevs_operational": 2, 00:23:04.795 "base_bdevs_list": [ 00:23:04.795 { 00:23:04.795 "name": "BaseBdev1", 00:23:04.795 "uuid": "6e60a59f-76e3-50a6-bcf0-9df4b20338f2", 00:23:04.795 "is_configured": true, 00:23:04.795 "data_offset": 256, 00:23:04.795 "data_size": 7936 00:23:04.795 }, 00:23:04.795 { 00:23:04.795 "name": "BaseBdev2", 00:23:04.795 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:04.795 "is_configured": true, 00:23:04.795 "data_offset": 256, 00:23:04.795 "data_size": 7936 
00:23:04.795 } 00:23:04.795 ] 00:23:04.795 }' 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.795 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:05.361 [2024-10-30 10:51:26.648612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:05.361 10:51:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:05.619 [2024-10-30 10:51:27.032446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:05.619 /dev/nbd0 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.619 1+0 records in 00:23:05.619 1+0 records out 00:23:05.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289658 s, 14.1 MB/s 00:23:05.619 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.877 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:23:05.877 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.877 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:05.877 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:23:05.877 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.877 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:05.877 10:51:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:05.877 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:05.877 10:51:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:23:06.811 7936+0 records in 00:23:06.811 7936+0 records out 00:23:06.811 32505856 bytes (33 MB, 31 MiB) copied, 0.931991 s, 34.9 MB/s 00:23:06.811 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:06.811 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:06.811 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:06.811 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:06.811 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:06.811 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:06.811 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:07.070 [2024-10-30 10:51:28.345904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:07.070 10:51:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.070 [2024-10-30 10:51:28.362076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:07.070 "name": "raid_bdev1", 00:23:07.070 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:07.070 "strip_size_kb": 0, 00:23:07.070 "state": "online", 00:23:07.070 "raid_level": "raid1", 00:23:07.070 "superblock": true, 00:23:07.070 "num_base_bdevs": 2, 00:23:07.070 "num_base_bdevs_discovered": 1, 00:23:07.070 "num_base_bdevs_operational": 1, 00:23:07.070 "base_bdevs_list": [ 00:23:07.070 { 00:23:07.070 "name": null, 00:23:07.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.070 "is_configured": false, 00:23:07.070 "data_offset": 0, 00:23:07.070 "data_size": 7936 00:23:07.070 }, 00:23:07.070 { 00:23:07.070 "name": "BaseBdev2", 00:23:07.070 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:07.070 "is_configured": true, 00:23:07.070 "data_offset": 256, 00:23:07.070 "data_size": 7936 00:23:07.070 } 00:23:07.070 ] 00:23:07.070 }' 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:07.070 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.637 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:07.637 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.637 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:07.638 [2024-10-30 10:51:28.862322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:07.638 [2024-10-30 10:51:28.877144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:23:07.638 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.638 10:51:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:07.638 [2024-10-30 10:51:28.879843] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.574 "name": "raid_bdev1", 00:23:08.574 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:08.574 "strip_size_kb": 0, 00:23:08.574 "state": "online", 00:23:08.574 "raid_level": "raid1", 00:23:08.574 "superblock": true, 00:23:08.574 "num_base_bdevs": 2, 00:23:08.574 "num_base_bdevs_discovered": 2, 00:23:08.574 "num_base_bdevs_operational": 2, 00:23:08.574 "process": { 00:23:08.574 "type": "rebuild", 00:23:08.574 "target": "spare", 00:23:08.574 "progress": { 00:23:08.574 "blocks": 2560, 00:23:08.574 "percent": 32 00:23:08.574 } 00:23:08.574 }, 00:23:08.574 "base_bdevs_list": [ 00:23:08.574 { 00:23:08.574 "name": "spare", 00:23:08.574 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:08.574 "is_configured": true, 00:23:08.574 "data_offset": 256, 00:23:08.574 "data_size": 7936 00:23:08.574 }, 00:23:08.574 { 00:23:08.574 "name": "BaseBdev2", 00:23:08.574 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:08.574 "is_configured": true, 00:23:08.574 "data_offset": 256, 00:23:08.574 "data_size": 7936 00:23:08.574 } 00:23:08.574 ] 00:23:08.574 }' 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:08.574 10:51:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:08.832 10:51:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.832 [2024-10-30 10:51:30.057296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:08.832 [2024-10-30 10:51:30.089290] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:08.832 [2024-10-30 10:51:30.089372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.832 [2024-10-30 10:51:30.089397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:08.832 [2024-10-30 10:51:30.089413] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.832 10:51:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.832 "name": "raid_bdev1", 00:23:08.832 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:08.832 "strip_size_kb": 0, 00:23:08.832 "state": "online", 00:23:08.832 "raid_level": "raid1", 00:23:08.832 "superblock": true, 00:23:08.832 "num_base_bdevs": 2, 00:23:08.832 "num_base_bdevs_discovered": 1, 00:23:08.832 "num_base_bdevs_operational": 1, 00:23:08.832 "base_bdevs_list": [ 00:23:08.832 { 00:23:08.832 "name": null, 00:23:08.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.832 "is_configured": false, 00:23:08.832 "data_offset": 0, 00:23:08.832 "data_size": 7936 00:23:08.832 }, 00:23:08.832 { 00:23:08.832 "name": "BaseBdev2", 00:23:08.832 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:08.832 "is_configured": true, 00:23:08.832 "data_offset": 256, 00:23:08.832 "data_size": 7936 00:23:08.832 } 00:23:08.832 ] 00:23:08.832 }' 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.832 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:09.399 "name": "raid_bdev1", 00:23:09.399 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:09.399 "strip_size_kb": 0, 00:23:09.399 "state": "online", 00:23:09.399 "raid_level": "raid1", 00:23:09.399 "superblock": true, 00:23:09.399 "num_base_bdevs": 2, 00:23:09.399 "num_base_bdevs_discovered": 1, 00:23:09.399 "num_base_bdevs_operational": 1, 00:23:09.399 "base_bdevs_list": [ 00:23:09.399 { 00:23:09.399 "name": null, 00:23:09.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.399 
"is_configured": false, 00:23:09.399 "data_offset": 0, 00:23:09.399 "data_size": 7936 00:23:09.399 }, 00:23:09.399 { 00:23:09.399 "name": "BaseBdev2", 00:23:09.399 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:09.399 "is_configured": true, 00:23:09.399 "data_offset": 256, 00:23:09.399 "data_size": 7936 00:23:09.399 } 00:23:09.399 ] 00:23:09.399 }' 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:09.399 [2024-10-30 10:51:30.805215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:09.399 [2024-10-30 10:51:30.819042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.399 10:51:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:09.399 [2024-10-30 10:51:30.821799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.778 10:51:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:10.778 "name": "raid_bdev1", 00:23:10.778 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:10.778 "strip_size_kb": 0, 00:23:10.778 "state": "online", 00:23:10.778 "raid_level": "raid1", 00:23:10.778 "superblock": true, 00:23:10.778 "num_base_bdevs": 2, 00:23:10.778 "num_base_bdevs_discovered": 2, 00:23:10.778 "num_base_bdevs_operational": 2, 00:23:10.778 "process": { 00:23:10.778 "type": "rebuild", 00:23:10.778 "target": "spare", 00:23:10.778 "progress": { 00:23:10.778 "blocks": 2560, 00:23:10.778 "percent": 32 00:23:10.778 } 00:23:10.778 }, 00:23:10.778 "base_bdevs_list": [ 00:23:10.778 { 00:23:10.778 "name": "spare", 00:23:10.778 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:10.778 "is_configured": true, 00:23:10.778 "data_offset": 256, 00:23:10.778 "data_size": 7936 00:23:10.778 }, 
00:23:10.778 { 00:23:10.778 "name": "BaseBdev2", 00:23:10.778 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:10.778 "is_configured": true, 00:23:10.778 "data_offset": 256, 00:23:10.778 "data_size": 7936 00:23:10.778 } 00:23:10.778 ] 00:23:10.778 }' 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:10.778 10:51:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:10.778 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=766 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.778 10:51:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.778 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:10.778 "name": "raid_bdev1", 00:23:10.778 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:10.778 "strip_size_kb": 0, 00:23:10.778 "state": "online", 00:23:10.778 "raid_level": "raid1", 00:23:10.778 "superblock": true, 00:23:10.778 "num_base_bdevs": 2, 00:23:10.778 "num_base_bdevs_discovered": 2, 00:23:10.779 "num_base_bdevs_operational": 2, 00:23:10.779 "process": { 00:23:10.779 "type": "rebuild", 00:23:10.779 "target": "spare", 00:23:10.779 "progress": { 00:23:10.779 "blocks": 2816, 00:23:10.779 "percent": 35 00:23:10.779 } 00:23:10.779 }, 00:23:10.779 "base_bdevs_list": [ 00:23:10.779 { 00:23:10.779 "name": "spare", 00:23:10.779 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:10.779 "is_configured": true, 00:23:10.779 "data_offset": 256, 00:23:10.779 "data_size": 7936 00:23:10.779 }, 00:23:10.779 { 00:23:10.779 "name": "BaseBdev2", 00:23:10.779 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:10.779 
"is_configured": true, 00:23:10.779 "data_offset": 256, 00:23:10.779 "data_size": 7936 00:23:10.779 } 00:23:10.779 ] 00:23:10.779 }' 00:23:10.779 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:10.779 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:10.779 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:10.779 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:10.779 10:51:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:11.716 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.976 10:51:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.976 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:11.976 "name": "raid_bdev1", 00:23:11.976 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:11.976 "strip_size_kb": 0, 00:23:11.976 "state": "online", 00:23:11.976 "raid_level": "raid1", 00:23:11.976 "superblock": true, 00:23:11.976 "num_base_bdevs": 2, 00:23:11.976 "num_base_bdevs_discovered": 2, 00:23:11.976 "num_base_bdevs_operational": 2, 00:23:11.976 "process": { 00:23:11.976 "type": "rebuild", 00:23:11.976 "target": "spare", 00:23:11.976 "progress": { 00:23:11.976 "blocks": 5888, 00:23:11.976 "percent": 74 00:23:11.976 } 00:23:11.976 }, 00:23:11.976 "base_bdevs_list": [ 00:23:11.976 { 00:23:11.976 "name": "spare", 00:23:11.976 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:11.976 "is_configured": true, 00:23:11.976 "data_offset": 256, 00:23:11.976 "data_size": 7936 00:23:11.976 }, 00:23:11.976 { 00:23:11.976 "name": "BaseBdev2", 00:23:11.976 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:11.976 "is_configured": true, 00:23:11.976 "data_offset": 256, 00:23:11.976 "data_size": 7936 00:23:11.976 } 00:23:11.976 ] 00:23:11.976 }' 00:23:11.976 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:11.976 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:11.976 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:11.976 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.976 10:51:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:12.545 [2024-10-30 10:51:33.946080] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:23:12.545 [2024-10-30 10:51:33.946191] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:12.545 [2024-10-30 10:51:33.946346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:13.115 "name": "raid_bdev1", 00:23:13.115 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:13.115 "strip_size_kb": 0, 00:23:13.115 "state": "online", 00:23:13.115 "raid_level": "raid1", 00:23:13.115 "superblock": true, 00:23:13.115 
"num_base_bdevs": 2, 00:23:13.115 "num_base_bdevs_discovered": 2, 00:23:13.115 "num_base_bdevs_operational": 2, 00:23:13.115 "base_bdevs_list": [ 00:23:13.115 { 00:23:13.115 "name": "spare", 00:23:13.115 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:13.115 "is_configured": true, 00:23:13.115 "data_offset": 256, 00:23:13.115 "data_size": 7936 00:23:13.115 }, 00:23:13.115 { 00:23:13.115 "name": "BaseBdev2", 00:23:13.115 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:13.115 "is_configured": true, 00:23:13.115 "data_offset": 256, 00:23:13.115 "data_size": 7936 00:23:13.115 } 00:23:13.115 ] 00:23:13.115 }' 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.115 
10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:13.115 "name": "raid_bdev1", 00:23:13.115 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:13.115 "strip_size_kb": 0, 00:23:13.115 "state": "online", 00:23:13.115 "raid_level": "raid1", 00:23:13.115 "superblock": true, 00:23:13.115 "num_base_bdevs": 2, 00:23:13.115 "num_base_bdevs_discovered": 2, 00:23:13.115 "num_base_bdevs_operational": 2, 00:23:13.115 "base_bdevs_list": [ 00:23:13.115 { 00:23:13.115 "name": "spare", 00:23:13.115 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:13.115 "is_configured": true, 00:23:13.115 "data_offset": 256, 00:23:13.115 "data_size": 7936 00:23:13.115 }, 00:23:13.115 { 00:23:13.115 "name": "BaseBdev2", 00:23:13.115 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:13.115 "is_configured": true, 00:23:13.115 "data_offset": 256, 00:23:13.115 "data_size": 7936 00:23:13.115 } 00:23:13.115 ] 00:23:13.115 }' 00:23:13.115 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:13.377 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.378 "name": "raid_bdev1", 00:23:13.378 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:13.378 
"strip_size_kb": 0, 00:23:13.378 "state": "online", 00:23:13.378 "raid_level": "raid1", 00:23:13.378 "superblock": true, 00:23:13.378 "num_base_bdevs": 2, 00:23:13.378 "num_base_bdevs_discovered": 2, 00:23:13.378 "num_base_bdevs_operational": 2, 00:23:13.378 "base_bdevs_list": [ 00:23:13.378 { 00:23:13.378 "name": "spare", 00:23:13.378 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:13.378 "is_configured": true, 00:23:13.378 "data_offset": 256, 00:23:13.378 "data_size": 7936 00:23:13.378 }, 00:23:13.378 { 00:23:13.378 "name": "BaseBdev2", 00:23:13.378 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:13.378 "is_configured": true, 00:23:13.378 "data_offset": 256, 00:23:13.378 "data_size": 7936 00:23:13.378 } 00:23:13.378 ] 00:23:13.378 }' 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.378 10:51:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.948 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:13.948 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.948 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.948 [2024-10-30 10:51:35.193732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:13.948 [2024-10-30 10:51:35.193939] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:13.948 [2024-10-30 10:51:35.194222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.948 [2024-10-30 10:51:35.194463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.948 [2024-10-30 10:51:35.194636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:23:13.948 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.948 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:13.949 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:14.208 /dev/nbd0 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:14.208 1+0 records in 00:23:14.208 1+0 records out 00:23:14.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311585 s, 13.1 MB/s 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:14.208 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:14.467 /dev/nbd1 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:14.727 1+0 records in 00:23:14.727 1+0 records out 00:23:14.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422948 s, 9.7 MB/s 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:14.727 10:51:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:14.727 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:14.727 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:14.727 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:14.727 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:23:14.727 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:23:14.727 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:14.727 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:15.295 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.555 [2024-10-30 10:51:36.844715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:15.555 [2024-10-30 10:51:36.844780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.555 [2024-10-30 10:51:36.844825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:15.555 [2024-10-30 10:51:36.844840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:23:15.555 [2024-10-30 10:51:36.847604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.555 [2024-10-30 10:51:36.847650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:15.555 [2024-10-30 10:51:36.847737] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:15.555 [2024-10-30 10:51:36.847809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:15.555 [2024-10-30 10:51:36.848042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:15.555 spare 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.555 [2024-10-30 10:51:36.948164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:15.555 [2024-10-30 10:51:36.948373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:15.555 [2024-10-30 10:51:36.948513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:23:15.555 [2024-10-30 10:51:36.948691] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:15.555 [2024-10-30 10:51:36.948707] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:15.555 [2024-10-30 10:51:36.948902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:15.555 10:51:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.555 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.555 "name": "raid_bdev1", 00:23:15.555 "uuid": 
"491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:15.555 "strip_size_kb": 0, 00:23:15.555 "state": "online", 00:23:15.555 "raid_level": "raid1", 00:23:15.555 "superblock": true, 00:23:15.555 "num_base_bdevs": 2, 00:23:15.555 "num_base_bdevs_discovered": 2, 00:23:15.555 "num_base_bdevs_operational": 2, 00:23:15.555 "base_bdevs_list": [ 00:23:15.555 { 00:23:15.555 "name": "spare", 00:23:15.555 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:15.555 "is_configured": true, 00:23:15.555 "data_offset": 256, 00:23:15.555 "data_size": 7936 00:23:15.555 }, 00:23:15.555 { 00:23:15.555 "name": "BaseBdev2", 00:23:15.555 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:15.555 "is_configured": true, 00:23:15.555 "data_offset": 256, 00:23:15.555 "data_size": 7936 00:23:15.555 } 00:23:15.555 ] 00:23:15.555 }' 00:23:15.555 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.555 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:16.124 "name": "raid_bdev1", 00:23:16.124 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:16.124 "strip_size_kb": 0, 00:23:16.124 "state": "online", 00:23:16.124 "raid_level": "raid1", 00:23:16.124 "superblock": true, 00:23:16.124 "num_base_bdevs": 2, 00:23:16.124 "num_base_bdevs_discovered": 2, 00:23:16.124 "num_base_bdevs_operational": 2, 00:23:16.124 "base_bdevs_list": [ 00:23:16.124 { 00:23:16.124 "name": "spare", 00:23:16.124 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:16.124 "is_configured": true, 00:23:16.124 "data_offset": 256, 00:23:16.124 "data_size": 7936 00:23:16.124 }, 00:23:16.124 { 00:23:16.124 "name": "BaseBdev2", 00:23:16.124 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:16.124 "is_configured": true, 00:23:16.124 "data_offset": 256, 00:23:16.124 "data_size": 7936 00:23:16.124 } 00:23:16.124 ] 00:23:16.124 }' 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:16.124 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.384 
10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.384 [2024-10-30 10:51:37.685157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:16.384 "name": "raid_bdev1", 00:23:16.384 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:16.384 "strip_size_kb": 0, 00:23:16.384 "state": "online", 00:23:16.384 "raid_level": "raid1", 00:23:16.384 "superblock": true, 00:23:16.384 "num_base_bdevs": 2, 00:23:16.384 "num_base_bdevs_discovered": 1, 00:23:16.384 "num_base_bdevs_operational": 1, 00:23:16.384 "base_bdevs_list": [ 00:23:16.384 { 00:23:16.384 "name": null, 00:23:16.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.384 "is_configured": false, 00:23:16.384 "data_offset": 0, 00:23:16.384 "data_size": 7936 00:23:16.384 }, 00:23:16.384 { 00:23:16.384 "name": "BaseBdev2", 00:23:16.384 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:16.384 "is_configured": true, 00:23:16.384 "data_offset": 256, 00:23:16.384 "data_size": 7936 00:23:16.384 } 00:23:16.384 ] 00:23:16.384 }' 00:23:16.384 10:51:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:16.384 10:51:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.952 10:51:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:16.952 10:51:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.952 10:51:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:16.952 [2024-10-30 10:51:38.213335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:16.952 [2024-10-30 10:51:38.213594] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:16.952 [2024-10-30 10:51:38.213620] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:16.952 [2024-10-30 10:51:38.213685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:16.952 [2024-10-30 10:51:38.227038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:16.952 10:51:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.952 10:51:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:16.952 [2024-10-30 10:51:38.229584] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.889 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:17.889 "name": "raid_bdev1", 00:23:17.889 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:17.889 "strip_size_kb": 0, 00:23:17.889 "state": "online", 00:23:17.889 "raid_level": "raid1", 00:23:17.889 "superblock": true, 00:23:17.889 "num_base_bdevs": 2, 00:23:17.889 "num_base_bdevs_discovered": 2, 00:23:17.889 "num_base_bdevs_operational": 2, 00:23:17.889 "process": { 00:23:17.890 "type": "rebuild", 00:23:17.890 "target": "spare", 00:23:17.890 "progress": { 00:23:17.890 "blocks": 2560, 00:23:17.890 "percent": 32 00:23:17.890 } 00:23:17.890 }, 00:23:17.890 "base_bdevs_list": [ 00:23:17.890 { 00:23:17.890 "name": "spare", 00:23:17.890 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:17.890 "is_configured": true, 00:23:17.890 "data_offset": 256, 00:23:17.890 "data_size": 7936 00:23:17.890 }, 00:23:17.890 { 00:23:17.890 "name": "BaseBdev2", 00:23:17.890 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:17.890 "is_configured": true, 00:23:17.890 "data_offset": 256, 00:23:17.890 "data_size": 7936 00:23:17.890 } 00:23:17.890 ] 00:23:17.890 }' 00:23:17.890 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:17.890 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.890 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:18.149 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.149 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.150 [2024-10-30 10:51:39.400406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:18.150 [2024-10-30 10:51:39.439862] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:18.150 [2024-10-30 10:51:39.439943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.150 [2024-10-30 10:51:39.439992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:18.150 [2024-10-30 10:51:39.440025] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.150 "name": "raid_bdev1", 00:23:18.150 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:18.150 "strip_size_kb": 0, 00:23:18.150 "state": "online", 00:23:18.150 "raid_level": "raid1", 00:23:18.150 "superblock": true, 00:23:18.150 "num_base_bdevs": 2, 00:23:18.150 "num_base_bdevs_discovered": 1, 00:23:18.150 "num_base_bdevs_operational": 1, 00:23:18.150 "base_bdevs_list": [ 00:23:18.150 { 00:23:18.150 "name": null, 00:23:18.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.150 
"is_configured": false, 00:23:18.150 "data_offset": 0, 00:23:18.150 "data_size": 7936 00:23:18.150 }, 00:23:18.150 { 00:23:18.150 "name": "BaseBdev2", 00:23:18.150 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:18.150 "is_configured": true, 00:23:18.150 "data_offset": 256, 00:23:18.150 "data_size": 7936 00:23:18.150 } 00:23:18.150 ] 00:23:18.150 }' 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.150 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.718 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:18.718 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.718 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:18.718 [2024-10-30 10:51:39.975515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:18.718 [2024-10-30 10:51:39.975674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.718 [2024-10-30 10:51:39.975712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:18.718 [2024-10-30 10:51:39.975731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.718 [2024-10-30 10:51:39.976123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.718 [2024-10-30 10:51:39.976186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:18.718 [2024-10-30 10:51:39.976274] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:18.718 [2024-10-30 10:51:39.976301] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:23:18.718 [2024-10-30 10:51:39.976317] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:18.718 [2024-10-30 10:51:39.976350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:18.718 [2024-10-30 10:51:39.989122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:18.718 spare 00:23:18.718 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.718 10:51:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:18.718 [2024-10-30 10:51:39.991881] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.654 10:51:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.654 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:19.654 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:19.654 "name": "raid_bdev1", 00:23:19.654 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:19.654 "strip_size_kb": 0, 00:23:19.654 "state": "online", 00:23:19.654 "raid_level": "raid1", 00:23:19.654 "superblock": true, 00:23:19.654 "num_base_bdevs": 2, 00:23:19.654 "num_base_bdevs_discovered": 2, 00:23:19.654 "num_base_bdevs_operational": 2, 00:23:19.654 "process": { 00:23:19.654 "type": "rebuild", 00:23:19.654 "target": "spare", 00:23:19.654 "progress": { 00:23:19.654 "blocks": 2560, 00:23:19.654 "percent": 32 00:23:19.654 } 00:23:19.654 }, 00:23:19.654 "base_bdevs_list": [ 00:23:19.654 { 00:23:19.654 "name": "spare", 00:23:19.654 "uuid": "55d1fa73-1668-5745-99f6-25aab2563649", 00:23:19.654 "is_configured": true, 00:23:19.654 "data_offset": 256, 00:23:19.654 "data_size": 7936 00:23:19.654 }, 00:23:19.654 { 00:23:19.654 "name": "BaseBdev2", 00:23:19.654 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:19.654 "is_configured": true, 00:23:19.654 "data_offset": 256, 00:23:19.654 "data_size": 7936 00:23:19.654 } 00:23:19.654 ] 00:23:19.654 }' 00:23:19.654 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:19.654 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:19.654 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.913 10:51:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.913 [2024-10-30 10:51:41.165865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:19.913 [2024-10-30 10:51:41.203682] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:19.913 [2024-10-30 10:51:41.203771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.913 [2024-10-30 10:51:41.203800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:19.913 [2024-10-30 10:51:41.203811] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.913 10:51:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.913 "name": "raid_bdev1", 00:23:19.913 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:19.913 "strip_size_kb": 0, 00:23:19.913 "state": "online", 00:23:19.913 "raid_level": "raid1", 00:23:19.913 "superblock": true, 00:23:19.913 "num_base_bdevs": 2, 00:23:19.913 "num_base_bdevs_discovered": 1, 00:23:19.913 "num_base_bdevs_operational": 1, 00:23:19.913 "base_bdevs_list": [ 00:23:19.913 { 00:23:19.913 "name": null, 00:23:19.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.913 "is_configured": false, 00:23:19.913 "data_offset": 0, 00:23:19.913 "data_size": 7936 00:23:19.913 }, 00:23:19.913 { 00:23:19.913 "name": "BaseBdev2", 00:23:19.913 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:19.913 "is_configured": true, 00:23:19.913 "data_offset": 256, 00:23:19.913 "data_size": 7936 00:23:19.913 } 00:23:19.913 ] 00:23:19.913 }' 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.913 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.481 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:23:20.481 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:20.481 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:20.481 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:20.481 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:20.481 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.481 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.481 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:20.482 "name": "raid_bdev1", 00:23:20.482 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:20.482 "strip_size_kb": 0, 00:23:20.482 "state": "online", 00:23:20.482 "raid_level": "raid1", 00:23:20.482 "superblock": true, 00:23:20.482 "num_base_bdevs": 2, 00:23:20.482 "num_base_bdevs_discovered": 1, 00:23:20.482 "num_base_bdevs_operational": 1, 00:23:20.482 "base_bdevs_list": [ 00:23:20.482 { 00:23:20.482 "name": null, 00:23:20.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.482 "is_configured": false, 00:23:20.482 "data_offset": 0, 00:23:20.482 "data_size": 7936 00:23:20.482 }, 00:23:20.482 { 00:23:20.482 "name": "BaseBdev2", 00:23:20.482 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:20.482 "is_configured": true, 
00:23:20.482 "data_offset": 256, 00:23:20.482 "data_size": 7936 00:23:20.482 } 00:23:20.482 ] 00:23:20.482 }' 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:20.482 [2024-10-30 10:51:41.914475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:20.482 [2024-10-30 10:51:41.914638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.482 [2024-10-30 10:51:41.914673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:20.482 [2024-10-30 10:51:41.914688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.482 [2024-10-30 10:51:41.915002] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.482 [2024-10-30 10:51:41.915024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:20.482 [2024-10-30 10:51:41.915096] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:20.482 [2024-10-30 10:51:41.915117] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:20.482 [2024-10-30 10:51:41.915145] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:20.482 [2024-10-30 10:51:41.915190] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:20.482 BaseBdev1 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.482 10:51:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:21.858 "name": "raid_bdev1", 00:23:21.858 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:21.858 "strip_size_kb": 0, 00:23:21.858 "state": "online", 00:23:21.858 "raid_level": "raid1", 00:23:21.858 "superblock": true, 00:23:21.858 "num_base_bdevs": 2, 00:23:21.858 "num_base_bdevs_discovered": 1, 00:23:21.858 "num_base_bdevs_operational": 1, 00:23:21.858 "base_bdevs_list": [ 00:23:21.858 { 00:23:21.858 "name": null, 00:23:21.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.858 "is_configured": false, 00:23:21.858 "data_offset": 0, 00:23:21.858 "data_size": 7936 00:23:21.858 }, 00:23:21.858 { 00:23:21.858 "name": "BaseBdev2", 00:23:21.858 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:21.858 "is_configured": true, 00:23:21.858 "data_offset": 256, 00:23:21.858 "data_size": 7936 00:23:21.858 } 00:23:21.858 ] 00:23:21.858 }' 00:23:21.858 10:51:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:21.858 10:51:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:22.118 "name": "raid_bdev1", 00:23:22.118 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:22.118 "strip_size_kb": 0, 00:23:22.118 "state": "online", 00:23:22.118 "raid_level": "raid1", 00:23:22.118 "superblock": true, 00:23:22.118 "num_base_bdevs": 2, 00:23:22.118 "num_base_bdevs_discovered": 1, 00:23:22.118 "num_base_bdevs_operational": 1, 00:23:22.118 "base_bdevs_list": [ 00:23:22.118 { 00:23:22.118 "name": null, 00:23:22.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.118 "is_configured": false, 00:23:22.118 "data_offset": 0, 00:23:22.118 
"data_size": 7936 00:23:22.118 }, 00:23:22.118 { 00:23:22.118 "name": "BaseBdev2", 00:23:22.118 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:22.118 "is_configured": true, 00:23:22.118 "data_offset": 256, 00:23:22.118 "data_size": 7936 00:23:22.118 } 00:23:22.118 ] 00:23:22.118 }' 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:22.118 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:22.377 [2024-10-30 10:51:43.619032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.377 [2024-10-30 10:51:43.619291] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:22.377 [2024-10-30 10:51:43.619326] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:22.377 request: 00:23:22.377 { 00:23:22.377 "base_bdev": "BaseBdev1", 00:23:22.377 "raid_bdev": "raid_bdev1", 00:23:22.377 "method": "bdev_raid_add_base_bdev", 00:23:22.377 "req_id": 1 00:23:22.377 } 00:23:22.377 Got JSON-RPC error response 00:23:22.377 response: 00:23:22.377 { 00:23:22.377 "code": -22, 00:23:22.377 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:22.377 } 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.377 10:51:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.312 "name": "raid_bdev1", 00:23:23.312 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:23.312 "strip_size_kb": 0, 00:23:23.312 "state": "online", 00:23:23.312 "raid_level": "raid1", 00:23:23.312 "superblock": true, 00:23:23.312 "num_base_bdevs": 2, 00:23:23.312 "num_base_bdevs_discovered": 1, 00:23:23.312 "num_base_bdevs_operational": 1, 00:23:23.312 "base_bdevs_list": [ 
00:23:23.312 { 00:23:23.312 "name": null, 00:23:23.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.312 "is_configured": false, 00:23:23.312 "data_offset": 0, 00:23:23.312 "data_size": 7936 00:23:23.312 }, 00:23:23.312 { 00:23:23.312 "name": "BaseBdev2", 00:23:23.312 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:23.312 "is_configured": true, 00:23:23.312 "data_offset": 256, 00:23:23.312 "data_size": 7936 00:23:23.312 } 00:23:23.312 ] 00:23:23.312 }' 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.312 10:51:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.879 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:23.879 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:23.879 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:23.879 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:23.879 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:23.879 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:23.880 "name": "raid_bdev1", 00:23:23.880 "uuid": "491f0edc-89da-4f3f-9b2d-4c461981552f", 00:23:23.880 "strip_size_kb": 0, 00:23:23.880 "state": "online", 00:23:23.880 "raid_level": "raid1", 00:23:23.880 "superblock": true, 00:23:23.880 "num_base_bdevs": 2, 00:23:23.880 "num_base_bdevs_discovered": 1, 00:23:23.880 "num_base_bdevs_operational": 1, 00:23:23.880 "base_bdevs_list": [ 00:23:23.880 { 00:23:23.880 "name": null, 00:23:23.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.880 "is_configured": false, 00:23:23.880 "data_offset": 0, 00:23:23.880 "data_size": 7936 00:23:23.880 }, 00:23:23.880 { 00:23:23.880 "name": "BaseBdev2", 00:23:23.880 "uuid": "040278be-da6d-5c6e-995f-1e0cbed4c556", 00:23:23.880 "is_configured": true, 00:23:23.880 "data_offset": 256, 00:23:23.880 "data_size": 7936 00:23:23.880 } 00:23:23.880 ] 00:23:23.880 }' 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88377 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88377 ']' 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88377 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:23:23.880 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:23.880 
10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88377 00:23:24.138 killing process with pid 88377 00:23:24.138 Received shutdown signal, test time was about 60.000000 seconds 00:23:24.138 00:23:24.138 Latency(us) 00:23:24.138 [2024-10-30T10:51:45.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.138 [2024-10-30T10:51:45.608Z] =================================================================================================================== 00:23:24.138 [2024-10-30T10:51:45.608Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:24.138 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:24.138 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:24.138 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88377' 00:23:24.138 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88377 00:23:24.138 [2024-10-30 10:51:45.362572] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:24.138 10:51:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88377 00:23:24.138 [2024-10-30 10:51:45.362728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.138 [2024-10-30 10:51:45.362794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.138 [2024-10-30 10:51:45.362813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:24.397 [2024-10-30 10:51:45.658139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.332 10:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:23:25.332 00:23:25.332 real 0m21.826s 00:23:25.332 user 0m29.564s 00:23:25.332 sys 0m2.703s 00:23:25.332 10:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:25.332 ************************************ 00:23:25.332 END TEST raid_rebuild_test_sb_md_separate 00:23:25.332 ************************************ 00:23:25.332 10:51:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:25.332 10:51:46 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:23:25.332 10:51:46 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:23:25.332 10:51:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:25.332 10:51:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:25.332 10:51:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:25.332 ************************************ 00:23:25.332 START TEST raid_state_function_test_sb_md_interleaved 00:23:25.332 ************************************ 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:25.332 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:25.333 10:51:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89088 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89088' 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:25.333 Process raid pid: 89088 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89088 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89088 ']' 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:25.333 10:51:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:25.590 [2024-10-30 10:51:46.845540] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:23:25.590 [2024-10-30 10:51:46.846076] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.590 [2024-10-30 10:51:47.021316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.848 [2024-10-30 10:51:47.152322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.106 [2024-10-30 10:51:47.361663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.106 [2024-10-30 10:51:47.361706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.673 [2024-10-30 10:51:47.866411] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:26.673 [2024-10-30 10:51:47.866672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:26.673 [2024-10-30 10:51:47.866703] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:26.673 [2024-10-30 10:51:47.866727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:26.673 10:51:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.673 10:51:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.673 "name": "Existed_Raid", 00:23:26.673 "uuid": "b442b612-f2fa-4103-958f-adeee1fddb4e", 00:23:26.673 "strip_size_kb": 0, 00:23:26.673 "state": "configuring", 00:23:26.673 "raid_level": "raid1", 00:23:26.673 "superblock": true, 00:23:26.673 "num_base_bdevs": 2, 00:23:26.673 "num_base_bdevs_discovered": 0, 00:23:26.673 "num_base_bdevs_operational": 2, 00:23:26.673 "base_bdevs_list": [ 00:23:26.673 { 00:23:26.673 "name": "BaseBdev1", 00:23:26.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.673 "is_configured": false, 00:23:26.673 "data_offset": 0, 00:23:26.673 "data_size": 0 00:23:26.673 }, 00:23:26.673 { 00:23:26.673 "name": "BaseBdev2", 00:23:26.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.673 "is_configured": false, 00:23:26.673 "data_offset": 0, 00:23:26.673 "data_size": 0 00:23:26.673 } 00:23:26.673 ] 00:23:26.673 }' 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.673 10:51:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.932 [2024-10-30 10:51:48.386452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:26.932 [2024-10-30 10:51:48.386492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:26.932 [2024-10-30 10:51:48.394447] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:26.932 [2024-10-30 10:51:48.394514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:26.932 [2024-10-30 10:51:48.394530] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:26.932 [2024-10-30 10:51:48.394564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.932 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.191 [2024-10-30 10:51:48.441849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.191 BaseBdev1 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.191 [ 00:23:27.191 { 00:23:27.191 "name": "BaseBdev1", 00:23:27.191 "aliases": [ 00:23:27.191 "af2f1b2f-d156-4c4f-9431-f99f4f7db6bf" 00:23:27.191 ], 00:23:27.191 "product_name": "Malloc disk", 00:23:27.191 "block_size": 4128, 00:23:27.191 "num_blocks": 8192, 00:23:27.191 "uuid": "af2f1b2f-d156-4c4f-9431-f99f4f7db6bf", 00:23:27.191 "md_size": 32, 00:23:27.191 
"md_interleave": true, 00:23:27.191 "dif_type": 0, 00:23:27.191 "assigned_rate_limits": { 00:23:27.191 "rw_ios_per_sec": 0, 00:23:27.191 "rw_mbytes_per_sec": 0, 00:23:27.191 "r_mbytes_per_sec": 0, 00:23:27.191 "w_mbytes_per_sec": 0 00:23:27.191 }, 00:23:27.191 "claimed": true, 00:23:27.191 "claim_type": "exclusive_write", 00:23:27.191 "zoned": false, 00:23:27.191 "supported_io_types": { 00:23:27.191 "read": true, 00:23:27.191 "write": true, 00:23:27.191 "unmap": true, 00:23:27.191 "flush": true, 00:23:27.191 "reset": true, 00:23:27.191 "nvme_admin": false, 00:23:27.191 "nvme_io": false, 00:23:27.191 "nvme_io_md": false, 00:23:27.191 "write_zeroes": true, 00:23:27.191 "zcopy": true, 00:23:27.191 "get_zone_info": false, 00:23:27.191 "zone_management": false, 00:23:27.191 "zone_append": false, 00:23:27.191 "compare": false, 00:23:27.191 "compare_and_write": false, 00:23:27.191 "abort": true, 00:23:27.191 "seek_hole": false, 00:23:27.191 "seek_data": false, 00:23:27.191 "copy": true, 00:23:27.191 "nvme_iov_md": false 00:23:27.191 }, 00:23:27.191 "memory_domains": [ 00:23:27.191 { 00:23:27.191 "dma_device_id": "system", 00:23:27.191 "dma_device_type": 1 00:23:27.191 }, 00:23:27.191 { 00:23:27.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.191 "dma_device_type": 2 00:23:27.191 } 00:23:27.191 ], 00:23:27.191 "driver_specific": {} 00:23:27.191 } 00:23:27.191 ] 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:27.191 10:51:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.191 "name": "Existed_Raid", 00:23:27.191 "uuid": "1e86368d-bbac-4632-8090-e77d55e78d4e", 00:23:27.191 "strip_size_kb": 0, 00:23:27.191 "state": "configuring", 00:23:27.191 "raid_level": "raid1", 
00:23:27.191 "superblock": true, 00:23:27.191 "num_base_bdevs": 2, 00:23:27.191 "num_base_bdevs_discovered": 1, 00:23:27.191 "num_base_bdevs_operational": 2, 00:23:27.191 "base_bdevs_list": [ 00:23:27.191 { 00:23:27.191 "name": "BaseBdev1", 00:23:27.191 "uuid": "af2f1b2f-d156-4c4f-9431-f99f4f7db6bf", 00:23:27.191 "is_configured": true, 00:23:27.191 "data_offset": 256, 00:23:27.191 "data_size": 7936 00:23:27.191 }, 00:23:27.191 { 00:23:27.191 "name": "BaseBdev2", 00:23:27.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.191 "is_configured": false, 00:23:27.191 "data_offset": 0, 00:23:27.191 "data_size": 0 00:23:27.191 } 00:23:27.191 ] 00:23:27.191 }' 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.191 10:51:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.761 [2024-10-30 10:51:49.022225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:27.761 [2024-10-30 10:51:49.022286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.761 [2024-10-30 10:51:49.030343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:27.761 [2024-10-30 10:51:49.032891] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:27.761 [2024-10-30 10:51:49.032958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.761 
10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:27.761 "name": "Existed_Raid", 00:23:27.761 "uuid": "0e402131-91d9-4f23-b7c2-5d3d9dee4489", 00:23:27.761 "strip_size_kb": 0, 00:23:27.761 "state": "configuring", 00:23:27.761 "raid_level": "raid1", 00:23:27.761 "superblock": true, 00:23:27.761 "num_base_bdevs": 2, 00:23:27.761 "num_base_bdevs_discovered": 1, 00:23:27.761 "num_base_bdevs_operational": 2, 00:23:27.761 "base_bdevs_list": [ 00:23:27.761 { 00:23:27.761 "name": "BaseBdev1", 00:23:27.761 "uuid": "af2f1b2f-d156-4c4f-9431-f99f4f7db6bf", 00:23:27.761 "is_configured": true, 00:23:27.761 "data_offset": 256, 00:23:27.761 "data_size": 7936 00:23:27.761 }, 00:23:27.761 { 00:23:27.761 "name": "BaseBdev2", 00:23:27.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.761 "is_configured": false, 00:23:27.761 "data_offset": 0, 00:23:27.761 "data_size": 0 00:23:27.761 } 00:23:27.761 ] 00:23:27.761 }' 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:23:27.761 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.328 [2024-10-30 10:51:49.569880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.328 [2024-10-30 10:51:49.570467] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:28.328 [2024-10-30 10:51:49.570667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:28.328 [2024-10-30 10:51:49.570856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:28.328 BaseBdev2 00:23:28.328 [2024-10-30 10:51:49.571176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:28.328 [2024-10-30 10:51:49.571335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.328 [2024-10-30 10:51:49.571646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.328 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.328 [ 00:23:28.328 { 00:23:28.328 "name": "BaseBdev2", 00:23:28.328 "aliases": [ 00:23:28.328 "61536455-6bcc-4763-ad72-c532b261f8c7" 00:23:28.328 ], 00:23:28.328 "product_name": "Malloc disk", 00:23:28.328 "block_size": 4128, 00:23:28.328 "num_blocks": 8192, 00:23:28.328 "uuid": "61536455-6bcc-4763-ad72-c532b261f8c7", 00:23:28.328 "md_size": 32, 00:23:28.328 "md_interleave": true, 00:23:28.328 "dif_type": 0, 00:23:28.328 "assigned_rate_limits": { 00:23:28.328 "rw_ios_per_sec": 0, 00:23:28.328 "rw_mbytes_per_sec": 0, 00:23:28.328 "r_mbytes_per_sec": 0, 00:23:28.328 "w_mbytes_per_sec": 0 00:23:28.328 }, 00:23:28.328 "claimed": true, 00:23:28.328 "claim_type": "exclusive_write", 
00:23:28.328 "zoned": false, 00:23:28.328 "supported_io_types": { 00:23:28.328 "read": true, 00:23:28.328 "write": true, 00:23:28.328 "unmap": true, 00:23:28.328 "flush": true, 00:23:28.328 "reset": true, 00:23:28.328 "nvme_admin": false, 00:23:28.328 "nvme_io": false, 00:23:28.328 "nvme_io_md": false, 00:23:28.328 "write_zeroes": true, 00:23:28.328 "zcopy": true, 00:23:28.328 "get_zone_info": false, 00:23:28.328 "zone_management": false, 00:23:28.328 "zone_append": false, 00:23:28.328 "compare": false, 00:23:28.328 "compare_and_write": false, 00:23:28.328 "abort": true, 00:23:28.328 "seek_hole": false, 00:23:28.328 "seek_data": false, 00:23:28.328 "copy": true, 00:23:28.328 "nvme_iov_md": false 00:23:28.328 }, 00:23:28.328 "memory_domains": [ 00:23:28.328 { 00:23:28.328 "dma_device_id": "system", 00:23:28.328 "dma_device_type": 1 00:23:28.328 }, 00:23:28.328 { 00:23:28.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.328 "dma_device_type": 2 00:23:28.328 } 00:23:28.328 ], 00:23:28.328 "driver_specific": {} 00:23:28.328 } 00:23:28.328 ] 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:28.329 
10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.329 "name": "Existed_Raid", 00:23:28.329 "uuid": "0e402131-91d9-4f23-b7c2-5d3d9dee4489", 00:23:28.329 "strip_size_kb": 0, 00:23:28.329 "state": "online", 00:23:28.329 "raid_level": "raid1", 00:23:28.329 "superblock": true, 00:23:28.329 "num_base_bdevs": 2, 00:23:28.329 "num_base_bdevs_discovered": 2, 00:23:28.329 
"num_base_bdevs_operational": 2, 00:23:28.329 "base_bdevs_list": [ 00:23:28.329 { 00:23:28.329 "name": "BaseBdev1", 00:23:28.329 "uuid": "af2f1b2f-d156-4c4f-9431-f99f4f7db6bf", 00:23:28.329 "is_configured": true, 00:23:28.329 "data_offset": 256, 00:23:28.329 "data_size": 7936 00:23:28.329 }, 00:23:28.329 { 00:23:28.329 "name": "BaseBdev2", 00:23:28.329 "uuid": "61536455-6bcc-4763-ad72-c532b261f8c7", 00:23:28.329 "is_configured": true, 00:23:28.329 "data_offset": 256, 00:23:28.329 "data_size": 7936 00:23:28.329 } 00:23:28.329 ] 00:23:28.329 }' 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.329 10:51:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:28.896 10:51:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.896 [2024-10-30 10:51:50.118612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:28.896 "name": "Existed_Raid", 00:23:28.896 "aliases": [ 00:23:28.896 "0e402131-91d9-4f23-b7c2-5d3d9dee4489" 00:23:28.896 ], 00:23:28.896 "product_name": "Raid Volume", 00:23:28.896 "block_size": 4128, 00:23:28.896 "num_blocks": 7936, 00:23:28.896 "uuid": "0e402131-91d9-4f23-b7c2-5d3d9dee4489", 00:23:28.896 "md_size": 32, 00:23:28.896 "md_interleave": true, 00:23:28.896 "dif_type": 0, 00:23:28.896 "assigned_rate_limits": { 00:23:28.896 "rw_ios_per_sec": 0, 00:23:28.896 "rw_mbytes_per_sec": 0, 00:23:28.896 "r_mbytes_per_sec": 0, 00:23:28.896 "w_mbytes_per_sec": 0 00:23:28.896 }, 00:23:28.896 "claimed": false, 00:23:28.896 "zoned": false, 00:23:28.896 "supported_io_types": { 00:23:28.896 "read": true, 00:23:28.896 "write": true, 00:23:28.896 "unmap": false, 00:23:28.896 "flush": false, 00:23:28.896 "reset": true, 00:23:28.896 "nvme_admin": false, 00:23:28.896 "nvme_io": false, 00:23:28.896 "nvme_io_md": false, 00:23:28.896 "write_zeroes": true, 00:23:28.896 "zcopy": false, 00:23:28.896 "get_zone_info": false, 00:23:28.896 "zone_management": false, 00:23:28.896 "zone_append": false, 00:23:28.896 "compare": false, 00:23:28.896 "compare_and_write": false, 00:23:28.896 "abort": false, 00:23:28.896 "seek_hole": false, 00:23:28.896 "seek_data": false, 00:23:28.896 "copy": false, 00:23:28.896 "nvme_iov_md": false 00:23:28.896 }, 00:23:28.896 "memory_domains": [ 00:23:28.896 { 00:23:28.896 "dma_device_id": "system", 00:23:28.896 "dma_device_type": 1 00:23:28.896 }, 00:23:28.896 { 00:23:28.896 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:28.896 "dma_device_type": 2 00:23:28.896 }, 00:23:28.896 { 00:23:28.896 "dma_device_id": "system", 00:23:28.896 "dma_device_type": 1 00:23:28.896 }, 00:23:28.896 { 00:23:28.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.896 "dma_device_type": 2 00:23:28.896 } 00:23:28.896 ], 00:23:28.896 "driver_specific": { 00:23:28.896 "raid": { 00:23:28.896 "uuid": "0e402131-91d9-4f23-b7c2-5d3d9dee4489", 00:23:28.896 "strip_size_kb": 0, 00:23:28.896 "state": "online", 00:23:28.896 "raid_level": "raid1", 00:23:28.896 "superblock": true, 00:23:28.896 "num_base_bdevs": 2, 00:23:28.896 "num_base_bdevs_discovered": 2, 00:23:28.896 "num_base_bdevs_operational": 2, 00:23:28.896 "base_bdevs_list": [ 00:23:28.896 { 00:23:28.896 "name": "BaseBdev1", 00:23:28.896 "uuid": "af2f1b2f-d156-4c4f-9431-f99f4f7db6bf", 00:23:28.896 "is_configured": true, 00:23:28.896 "data_offset": 256, 00:23:28.896 "data_size": 7936 00:23:28.896 }, 00:23:28.896 { 00:23:28.896 "name": "BaseBdev2", 00:23:28.896 "uuid": "61536455-6bcc-4763-ad72-c532b261f8c7", 00:23:28.896 "is_configured": true, 00:23:28.896 "data_offset": 256, 00:23:28.896 "data_size": 7936 00:23:28.896 } 00:23:28.896 ] 00:23:28.896 } 00:23:28.896 } 00:23:28.896 }' 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:28.896 BaseBdev2' 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.896 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:28.897 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:29.155 
10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.155 [2024-10-30 10:51:50.398365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:29.155 10:51:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.155 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.155 "name": "Existed_Raid", 00:23:29.155 "uuid": "0e402131-91d9-4f23-b7c2-5d3d9dee4489", 00:23:29.155 "strip_size_kb": 0, 00:23:29.155 "state": "online", 00:23:29.155 "raid_level": "raid1", 00:23:29.155 "superblock": true, 00:23:29.155 "num_base_bdevs": 2, 00:23:29.155 "num_base_bdevs_discovered": 1, 00:23:29.155 "num_base_bdevs_operational": 1, 00:23:29.155 "base_bdevs_list": [ 00:23:29.155 { 00:23:29.155 "name": null, 00:23:29.155 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:29.155 "is_configured": false, 00:23:29.155 "data_offset": 0, 00:23:29.155 "data_size": 7936 00:23:29.155 }, 00:23:29.155 { 00:23:29.155 "name": "BaseBdev2", 00:23:29.155 "uuid": "61536455-6bcc-4763-ad72-c532b261f8c7", 00:23:29.155 "is_configured": true, 00:23:29.155 "data_offset": 256, 00:23:29.155 "data_size": 7936 00:23:29.156 } 00:23:29.156 ] 00:23:29.156 }' 00:23:29.156 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.156 10:51:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:29.723 10:51:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.723 [2024-10-30 10:51:51.088741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:29.723 [2024-10-30 10:51:51.088876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:29.723 [2024-10-30 10:51:51.175627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:29.723 [2024-10-30 10:51:51.175892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:29.723 [2024-10-30 10:51:51.176078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:29.723 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89088 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89088 ']' 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89088 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89088 00:23:29.982 killing process with pid 89088 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89088' 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89088 00:23:29.982 [2024-10-30 10:51:51.266289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:29.982 10:51:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89088 00:23:29.982 [2024-10-30 10:51:51.281463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:30.919 
************************************ 00:23:30.919 END TEST raid_state_function_test_sb_md_interleaved 00:23:30.919 ************************************ 00:23:30.919 10:51:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:23:30.919 00:23:30.919 real 0m5.572s 00:23:30.919 user 0m8.428s 00:23:30.919 sys 0m0.807s 00:23:30.919 10:51:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:30.919 10:51:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:30.919 10:51:52 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:23:30.919 10:51:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:30.919 10:51:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:30.919 10:51:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:30.919 ************************************ 00:23:30.919 START TEST raid_superblock_test_md_interleaved 00:23:30.919 ************************************ 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89340 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89340 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89340 ']' 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:30.919 10:51:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:31.181 [2024-10-30 10:51:52.465940] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:23:31.181 [2024-10-30 10:51:52.466369] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89340 ] 00:23:31.181 [2024-10-30 10:51:52.642480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.442 [2024-10-30 10:51:52.771746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.701 [2024-10-30 10:51:52.978356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:31.701 [2024-10-30 10:51:52.978588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:32.268 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.269 malloc1 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.269 [2024-10-30 10:51:53.545461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:32.269 [2024-10-30 10:51:53.545667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.269 [2024-10-30 10:51:53.545748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:32.269 [2024-10-30 10:51:53.545886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.269 
[2024-10-30 10:51:53.548475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.269 [2024-10-30 10:51:53.548667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:32.269 pt1 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.269 malloc2 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.269 [2024-10-30 10:51:53.603971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:32.269 [2024-10-30 10:51:53.604203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.269 [2024-10-30 10:51:53.604247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:32.269 [2024-10-30 10:51:53.604263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.269 [2024-10-30 10:51:53.606840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.269 [2024-10-30 10:51:53.606884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:32.269 pt2 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.269 [2024-10-30 10:51:53.616098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:32.269 [2024-10-30 10:51:53.618562] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:32.269 [2024-10-30 10:51:53.618818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:32.269 [2024-10-30 10:51:53.618839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:32.269 [2024-10-30 10:51:53.618947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:32.269 [2024-10-30 10:51:53.619093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:32.269 [2024-10-30 10:51:53.619114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:32.269 [2024-10-30 10:51:53.619223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:32.269 
10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.269 "name": "raid_bdev1", 00:23:32.269 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:32.269 "strip_size_kb": 0, 00:23:32.269 "state": "online", 00:23:32.269 "raid_level": "raid1", 00:23:32.269 "superblock": true, 00:23:32.269 "num_base_bdevs": 2, 00:23:32.269 "num_base_bdevs_discovered": 2, 00:23:32.269 "num_base_bdevs_operational": 2, 00:23:32.269 "base_bdevs_list": [ 00:23:32.269 { 00:23:32.269 "name": "pt1", 00:23:32.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:32.269 "is_configured": true, 00:23:32.269 "data_offset": 256, 00:23:32.269 "data_size": 7936 00:23:32.269 }, 00:23:32.269 { 00:23:32.269 "name": "pt2", 00:23:32.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:32.269 "is_configured": true, 00:23:32.269 "data_offset": 256, 00:23:32.269 "data_size": 7936 00:23:32.269 } 00:23:32.269 ] 00:23:32.269 }' 00:23:32.269 10:51:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.269 10:51:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.838 [2024-10-30 10:51:54.128653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.838 "name": "raid_bdev1", 00:23:32.838 "aliases": [ 00:23:32.838 "cd85ac4a-6897-4675-954e-cba1390e5c2b" 00:23:32.838 ], 00:23:32.838 "product_name": "Raid Volume", 00:23:32.838 "block_size": 4128, 00:23:32.838 "num_blocks": 7936, 00:23:32.838 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:32.838 "md_size": 32, 
00:23:32.838 "md_interleave": true, 00:23:32.838 "dif_type": 0, 00:23:32.838 "assigned_rate_limits": { 00:23:32.838 "rw_ios_per_sec": 0, 00:23:32.838 "rw_mbytes_per_sec": 0, 00:23:32.838 "r_mbytes_per_sec": 0, 00:23:32.838 "w_mbytes_per_sec": 0 00:23:32.838 }, 00:23:32.838 "claimed": false, 00:23:32.838 "zoned": false, 00:23:32.838 "supported_io_types": { 00:23:32.838 "read": true, 00:23:32.838 "write": true, 00:23:32.838 "unmap": false, 00:23:32.838 "flush": false, 00:23:32.838 "reset": true, 00:23:32.838 "nvme_admin": false, 00:23:32.838 "nvme_io": false, 00:23:32.838 "nvme_io_md": false, 00:23:32.838 "write_zeroes": true, 00:23:32.838 "zcopy": false, 00:23:32.838 "get_zone_info": false, 00:23:32.838 "zone_management": false, 00:23:32.838 "zone_append": false, 00:23:32.838 "compare": false, 00:23:32.838 "compare_and_write": false, 00:23:32.838 "abort": false, 00:23:32.838 "seek_hole": false, 00:23:32.838 "seek_data": false, 00:23:32.838 "copy": false, 00:23:32.838 "nvme_iov_md": false 00:23:32.838 }, 00:23:32.838 "memory_domains": [ 00:23:32.838 { 00:23:32.838 "dma_device_id": "system", 00:23:32.838 "dma_device_type": 1 00:23:32.838 }, 00:23:32.838 { 00:23:32.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.838 "dma_device_type": 2 00:23:32.838 }, 00:23:32.838 { 00:23:32.838 "dma_device_id": "system", 00:23:32.838 "dma_device_type": 1 00:23:32.838 }, 00:23:32.838 { 00:23:32.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.838 "dma_device_type": 2 00:23:32.838 } 00:23:32.838 ], 00:23:32.838 "driver_specific": { 00:23:32.838 "raid": { 00:23:32.838 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:32.838 "strip_size_kb": 0, 00:23:32.838 "state": "online", 00:23:32.838 "raid_level": "raid1", 00:23:32.838 "superblock": true, 00:23:32.838 "num_base_bdevs": 2, 00:23:32.838 "num_base_bdevs_discovered": 2, 00:23:32.838 "num_base_bdevs_operational": 2, 00:23:32.838 "base_bdevs_list": [ 00:23:32.838 { 00:23:32.838 "name": "pt1", 00:23:32.838 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:32.838 "is_configured": true, 00:23:32.838 "data_offset": 256, 00:23:32.838 "data_size": 7936 00:23:32.838 }, 00:23:32.838 { 00:23:32.838 "name": "pt2", 00:23:32.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:32.838 "is_configured": true, 00:23:32.838 "data_offset": 256, 00:23:32.838 "data_size": 7936 00:23:32.838 } 00:23:32.838 ] 00:23:32.838 } 00:23:32.838 } 00:23:32.838 }' 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:32.838 pt2' 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.838 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:33.098 10:51:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.098 [2024-10-30 10:51:54.388649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cd85ac4a-6897-4675-954e-cba1390e5c2b 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z cd85ac4a-6897-4675-954e-cba1390e5c2b ']' 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.098 [2024-10-30 10:51:54.436263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:33.098 [2024-10-30 10:51:54.436290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:33.098 [2024-10-30 10:51:54.436403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.098 [2024-10-30 10:51:54.436491] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:33.098 [2024-10-30 10:51:54.436510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.098 10:51:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:33.098 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.099 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.099 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.099 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:33.099 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.099 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:33.099 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.099 10:51:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.358 [2024-10-30 10:51:54.580326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:33.358 [2024-10-30 10:51:54.582771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:33.358 [2024-10-30 10:51:54.582866] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:23:33.358 [2024-10-30 10:51:54.582960] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:33.358 [2024-10-30 10:51:54.583019] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:33.358 [2024-10-30 10:51:54.583037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:33.358 request: 00:23:33.358 { 00:23:33.358 "name": "raid_bdev1", 00:23:33.358 "raid_level": "raid1", 00:23:33.358 "base_bdevs": [ 00:23:33.358 "malloc1", 00:23:33.358 "malloc2" 00:23:33.358 ], 00:23:33.358 "superblock": false, 00:23:33.358 "method": "bdev_raid_create", 00:23:33.358 "req_id": 1 00:23:33.358 } 00:23:33.358 Got JSON-RPC error response 00:23:33.358 response: 00:23:33.358 { 00:23:33.358 "code": -17, 00:23:33.358 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:33.358 } 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.358 10:51:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.358 [2024-10-30 10:51:54.640321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:33.358 [2024-10-30 10:51:54.640604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.358 [2024-10-30 10:51:54.640676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:33.358 [2024-10-30 10:51:54.640928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.358 [2024-10-30 10:51:54.643640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.358 [2024-10-30 10:51:54.643815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:33.358 [2024-10-30 10:51:54.644004] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:33.358 [2024-10-30 10:51:54.644196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:33.358 pt1 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.358 10:51:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.358 
"name": "raid_bdev1", 00:23:33.358 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:33.358 "strip_size_kb": 0, 00:23:33.358 "state": "configuring", 00:23:33.358 "raid_level": "raid1", 00:23:33.358 "superblock": true, 00:23:33.358 "num_base_bdevs": 2, 00:23:33.358 "num_base_bdevs_discovered": 1, 00:23:33.358 "num_base_bdevs_operational": 2, 00:23:33.358 "base_bdevs_list": [ 00:23:33.358 { 00:23:33.358 "name": "pt1", 00:23:33.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:33.358 "is_configured": true, 00:23:33.358 "data_offset": 256, 00:23:33.358 "data_size": 7936 00:23:33.358 }, 00:23:33.358 { 00:23:33.358 "name": null, 00:23:33.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:33.358 "is_configured": false, 00:23:33.358 "data_offset": 256, 00:23:33.358 "data_size": 7936 00:23:33.358 } 00:23:33.358 ] 00:23:33.358 }' 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.358 10:51:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.927 [2024-10-30 10:51:55.140637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:33.927 [2024-10-30 10:51:55.140731] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.927 [2024-10-30 10:51:55.140761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:33.927 [2024-10-30 10:51:55.140778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.927 [2024-10-30 10:51:55.141026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.927 [2024-10-30 10:51:55.141070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:33.927 [2024-10-30 10:51:55.141142] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:33.927 [2024-10-30 10:51:55.141181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:33.927 [2024-10-30 10:51:55.141299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:33.927 [2024-10-30 10:51:55.141320] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:33.927 [2024-10-30 10:51:55.141435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:33.927 [2024-10-30 10:51:55.141545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:33.927 [2024-10-30 10:51:55.141578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:33.927 [2024-10-30 10:51:55.141663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.927 pt2 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:33.927 10:51:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.927 "name": 
"raid_bdev1", 00:23:33.927 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:33.927 "strip_size_kb": 0, 00:23:33.927 "state": "online", 00:23:33.927 "raid_level": "raid1", 00:23:33.927 "superblock": true, 00:23:33.927 "num_base_bdevs": 2, 00:23:33.927 "num_base_bdevs_discovered": 2, 00:23:33.927 "num_base_bdevs_operational": 2, 00:23:33.927 "base_bdevs_list": [ 00:23:33.927 { 00:23:33.927 "name": "pt1", 00:23:33.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:33.927 "is_configured": true, 00:23:33.927 "data_offset": 256, 00:23:33.927 "data_size": 7936 00:23:33.927 }, 00:23:33.927 { 00:23:33.927 "name": "pt2", 00:23:33.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:33.927 "is_configured": true, 00:23:33.927 "data_offset": 256, 00:23:33.927 "data_size": 7936 00:23:33.927 } 00:23:33.927 ] 00:23:33.927 }' 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.927 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:34.495 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:34.495 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:34.495 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:34.495 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:34.495 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:23:34.495 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:34.495 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:34.496 10:51:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:34.496 [2024-10-30 10:51:55.694016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:34.496 "name": "raid_bdev1", 00:23:34.496 "aliases": [ 00:23:34.496 "cd85ac4a-6897-4675-954e-cba1390e5c2b" 00:23:34.496 ], 00:23:34.496 "product_name": "Raid Volume", 00:23:34.496 "block_size": 4128, 00:23:34.496 "num_blocks": 7936, 00:23:34.496 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:34.496 "md_size": 32, 00:23:34.496 "md_interleave": true, 00:23:34.496 "dif_type": 0, 00:23:34.496 "assigned_rate_limits": { 00:23:34.496 "rw_ios_per_sec": 0, 00:23:34.496 "rw_mbytes_per_sec": 0, 00:23:34.496 "r_mbytes_per_sec": 0, 00:23:34.496 "w_mbytes_per_sec": 0 00:23:34.496 }, 00:23:34.496 "claimed": false, 00:23:34.496 "zoned": false, 00:23:34.496 "supported_io_types": { 00:23:34.496 "read": true, 00:23:34.496 "write": true, 00:23:34.496 "unmap": false, 00:23:34.496 "flush": false, 00:23:34.496 "reset": true, 00:23:34.496 "nvme_admin": false, 00:23:34.496 "nvme_io": false, 00:23:34.496 "nvme_io_md": false, 00:23:34.496 "write_zeroes": true, 00:23:34.496 "zcopy": false, 00:23:34.496 "get_zone_info": false, 00:23:34.496 "zone_management": false, 00:23:34.496 "zone_append": false, 00:23:34.496 "compare": false, 00:23:34.496 "compare_and_write": false, 00:23:34.496 "abort": false, 00:23:34.496 "seek_hole": false, 00:23:34.496 "seek_data": false, 00:23:34.496 "copy": false, 00:23:34.496 "nvme_iov_md": 
false 00:23:34.496 }, 00:23:34.496 "memory_domains": [ 00:23:34.496 { 00:23:34.496 "dma_device_id": "system", 00:23:34.496 "dma_device_type": 1 00:23:34.496 }, 00:23:34.496 { 00:23:34.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.496 "dma_device_type": 2 00:23:34.496 }, 00:23:34.496 { 00:23:34.496 "dma_device_id": "system", 00:23:34.496 "dma_device_type": 1 00:23:34.496 }, 00:23:34.496 { 00:23:34.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.496 "dma_device_type": 2 00:23:34.496 } 00:23:34.496 ], 00:23:34.496 "driver_specific": { 00:23:34.496 "raid": { 00:23:34.496 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:34.496 "strip_size_kb": 0, 00:23:34.496 "state": "online", 00:23:34.496 "raid_level": "raid1", 00:23:34.496 "superblock": true, 00:23:34.496 "num_base_bdevs": 2, 00:23:34.496 "num_base_bdevs_discovered": 2, 00:23:34.496 "num_base_bdevs_operational": 2, 00:23:34.496 "base_bdevs_list": [ 00:23:34.496 { 00:23:34.496 "name": "pt1", 00:23:34.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:34.496 "is_configured": true, 00:23:34.496 "data_offset": 256, 00:23:34.496 "data_size": 7936 00:23:34.496 }, 00:23:34.496 { 00:23:34.496 "name": "pt2", 00:23:34.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:34.496 "is_configured": true, 00:23:34.496 "data_offset": 256, 00:23:34.496 "data_size": 7936 00:23:34.496 } 00:23:34.496 ] 00:23:34.496 } 00:23:34.496 } 00:23:34.496 }' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:34.496 pt2' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.496 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:34.496 [2024-10-30 10:51:55.958196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:34.754 10:51:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' cd85ac4a-6897-4675-954e-cba1390e5c2b '!=' cd85ac4a-6897-4675-954e-cba1390e5c2b ']' 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:34.754 [2024-10-30 10:51:56.009785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:23:34.754 "name": "raid_bdev1", 00:23:34.754 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:34.754 "strip_size_kb": 0, 00:23:34.754 "state": "online", 00:23:34.754 "raid_level": "raid1", 00:23:34.754 "superblock": true, 00:23:34.754 "num_base_bdevs": 2, 00:23:34.754 "num_base_bdevs_discovered": 1, 00:23:34.754 "num_base_bdevs_operational": 1, 00:23:34.754 "base_bdevs_list": [ 00:23:34.754 { 00:23:34.754 "name": null, 00:23:34.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.754 "is_configured": false, 00:23:34.754 "data_offset": 0, 00:23:34.754 "data_size": 7936 00:23:34.754 }, 00:23:34.754 { 00:23:34.754 "name": "pt2", 00:23:34.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:34.754 "is_configured": true, 00:23:34.754 "data_offset": 256, 00:23:34.754 "data_size": 7936 00:23:34.754 } 00:23:34.754 ] 00:23:34.754 }' 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:34.754 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.321 [2024-10-30 10:51:56.562018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.321 [2024-10-30 10:51:56.562081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.321 [2024-10-30 10:51:56.562229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.321 [2024-10-30 10:51:56.562334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:23:35.321 [2024-10-30 10:51:56.562397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:35.321 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 [2024-10-30 10:51:56.633923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:35.322 [2024-10-30 10:51:56.634008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.322 [2024-10-30 10:51:56.634036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:35.322 [2024-10-30 10:51:56.634054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.322 [2024-10-30 10:51:56.636677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.322 [2024-10-30 10:51:56.636731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:35.322 [2024-10-30 10:51:56.636807] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:35.322 [2024-10-30 10:51:56.636876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:35.322 [2024-10-30 10:51:56.636989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:35.322 [2024-10-30 10:51:56.637013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:23:35.322 [2024-10-30 10:51:56.637134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:35.322 [2024-10-30 10:51:56.637228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:35.322 [2024-10-30 10:51:56.637249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:35.322 [2024-10-30 10:51:56.637340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.322 pt2 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.322 10:51:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.322 "name": "raid_bdev1", 00:23:35.322 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:35.322 "strip_size_kb": 0, 00:23:35.322 "state": "online", 00:23:35.322 "raid_level": "raid1", 00:23:35.322 "superblock": true, 00:23:35.322 "num_base_bdevs": 2, 00:23:35.322 "num_base_bdevs_discovered": 1, 00:23:35.322 "num_base_bdevs_operational": 1, 00:23:35.322 "base_bdevs_list": [ 00:23:35.322 { 00:23:35.322 "name": null, 00:23:35.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.322 "is_configured": false, 00:23:35.322 "data_offset": 256, 00:23:35.322 "data_size": 7936 00:23:35.322 }, 00:23:35.322 { 00:23:35.322 "name": "pt2", 00:23:35.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:35.322 "is_configured": true, 00:23:35.322 "data_offset": 256, 00:23:35.322 "data_size": 7936 00:23:35.322 } 00:23:35.322 ] 00:23:35.322 }' 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.322 10:51:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:35.898 10:51:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.898 [2024-10-30 10:51:57.158074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.898 [2024-10-30 10:51:57.158111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.898 [2024-10-30 10:51:57.158218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.898 [2024-10-30 10:51:57.158300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.898 [2024-10-30 10:51:57.158318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.898 [2024-10-30 10:51:57.226150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:35.898 [2024-10-30 10:51:57.226233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.898 [2024-10-30 10:51:57.226267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:35.898 [2024-10-30 10:51:57.226282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.898 [2024-10-30 10:51:57.228949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.898 [2024-10-30 10:51:57.229174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:35.898 [2024-10-30 10:51:57.229270] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:35.898 [2024-10-30 10:51:57.229336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:35.898 [2024-10-30 10:51:57.229476] bdev_raid.c:3679:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:35.898 [2024-10-30 10:51:57.229510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.898 [2024-10-30 10:51:57.229536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:35.898 [2024-10-30 10:51:57.229621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:35.898 [2024-10-30 10:51:57.229737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:23:35.898 [2024-10-30 10:51:57.229764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:35.898 [2024-10-30 10:51:57.229840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:35.898 [2024-10-30 10:51:57.229925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:35.898 [2024-10-30 10:51:57.229943] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:35.898 [2024-10-30 10:51:57.230129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.898 pt1 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.898 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.899 10:51:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.899 "name": "raid_bdev1", 00:23:35.899 "uuid": "cd85ac4a-6897-4675-954e-cba1390e5c2b", 00:23:35.899 "strip_size_kb": 0, 00:23:35.899 "state": "online", 00:23:35.899 "raid_level": "raid1", 00:23:35.899 "superblock": true, 00:23:35.899 "num_base_bdevs": 2, 00:23:35.899 "num_base_bdevs_discovered": 1, 00:23:35.899 "num_base_bdevs_operational": 1, 00:23:35.899 "base_bdevs_list": [ 00:23:35.899 { 00:23:35.899 "name": null, 00:23:35.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.899 "is_configured": false, 00:23:35.899 "data_offset": 256, 00:23:35.899 "data_size": 7936 00:23:35.899 }, 00:23:35.899 { 00:23:35.899 "name": "pt2", 00:23:35.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:35.899 "is_configured": true, 00:23:35.899 "data_offset": 256, 00:23:35.899 "data_size": 7936 00:23:35.899 } 00:23:35.899 ] 00:23:35.899 }' 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.899 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:36.489 [2024-10-30 10:51:57.806600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' cd85ac4a-6897-4675-954e-cba1390e5c2b '!=' cd85ac4a-6897-4675-954e-cba1390e5c2b ']' 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89340 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89340 ']' 00:23:36.489 10:51:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89340 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89340 00:23:36.489 killing process with pid 89340 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89340' 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89340 00:23:36.489 [2024-10-30 10:51:57.891131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:36.489 10:51:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89340 00:23:36.489 [2024-10-30 10:51:57.891267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.489 [2024-10-30 10:51:57.891335] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.489 [2024-10-30 10:51:57.891358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:36.747 [2024-10-30 10:51:58.079118] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:37.683 10:51:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:23:37.683 ************************************ 00:23:37.683 END TEST 
raid_superblock_test_md_interleaved 00:23:37.683 ************************************ 00:23:37.683 00:23:37.683 real 0m6.731s 00:23:37.683 user 0m10.673s 00:23:37.683 sys 0m0.985s 00:23:37.683 10:51:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:37.683 10:51:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:37.683 10:51:59 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:23:37.683 10:51:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:37.683 10:51:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:37.683 10:51:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:37.943 ************************************ 00:23:37.943 START TEST raid_rebuild_test_sb_md_interleaved 00:23:37.943 ************************************ 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:37.943 10:51:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:37.943 
10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89670 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89670 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89670 ']' 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:37.943 10:51:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:37.943 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:37.943 Zero copy mechanism will not be used. 00:23:37.943 [2024-10-30 10:51:59.278295] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:23:37.943 [2024-10-30 10:51:59.278483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89670 ] 00:23:38.201 [2024-10-30 10:51:59.460487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.201 [2024-10-30 10:51:59.594193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.458 [2024-10-30 10:51:59.804233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:38.458 [2024-10-30 10:51:59.804514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.026 BaseBdev1_malloc 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.026 10:52:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.026 [2024-10-30 10:52:00.267601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:39.026 [2024-10-30 10:52:00.267677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.026 [2024-10-30 10:52:00.267713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:39.026 [2024-10-30 10:52:00.267731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.026 [2024-10-30 10:52:00.270319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.026 [2024-10-30 10:52:00.270551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:39.026 BaseBdev1 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.026 BaseBdev2_malloc 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:23:39.026 [2024-10-30 10:52:00.326431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:39.026 [2024-10-30 10:52:00.326533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.026 [2024-10-30 10:52:00.326596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:39.026 [2024-10-30 10:52:00.326616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.026 [2024-10-30 10:52:00.329250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.026 [2024-10-30 10:52:00.329300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:39.026 BaseBdev2 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.026 spare_malloc 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.026 spare_delay 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.026 [2024-10-30 10:52:00.404961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:39.026 [2024-10-30 10:52:00.405087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.026 [2024-10-30 10:52:00.405119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:39.026 [2024-10-30 10:52:00.405137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.026 [2024-10-30 10:52:00.407749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.026 [2024-10-30 10:52:00.407826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:39.026 spare 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.026 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.026 [2024-10-30 10:52:00.413048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:39.026 [2024-10-30 10:52:00.415668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:39.026 [2024-10-30 
10:52:00.416068] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:39.027 [2024-10-30 10:52:00.416095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:39.027 [2024-10-30 10:52:00.416198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:39.027 [2024-10-30 10:52:00.416335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:39.027 [2024-10-30 10:52:00.416350] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:39.027 [2024-10-30 10:52:00.416445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.027 "name": "raid_bdev1", 00:23:39.027 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:39.027 "strip_size_kb": 0, 00:23:39.027 "state": "online", 00:23:39.027 "raid_level": "raid1", 00:23:39.027 "superblock": true, 00:23:39.027 "num_base_bdevs": 2, 00:23:39.027 "num_base_bdevs_discovered": 2, 00:23:39.027 "num_base_bdevs_operational": 2, 00:23:39.027 "base_bdevs_list": [ 00:23:39.027 { 00:23:39.027 "name": "BaseBdev1", 00:23:39.027 "uuid": "fb863fb2-ea36-56c8-b340-25ac6c0e7618", 00:23:39.027 "is_configured": true, 00:23:39.027 "data_offset": 256, 00:23:39.027 "data_size": 7936 00:23:39.027 }, 00:23:39.027 { 00:23:39.027 "name": "BaseBdev2", 00:23:39.027 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:39.027 "is_configured": true, 00:23:39.027 "data_offset": 256, 00:23:39.027 "data_size": 7936 00:23:39.027 } 00:23:39.027 ] 00:23:39.027 }' 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.027 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 10:52:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 [2024-10-30 10:52:00.897659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:39.595 10:52:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.595 10:52:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 [2024-10-30 10:52:01.001229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.595 10:52:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:39.595 "name": "raid_bdev1", 00:23:39.595 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:39.595 "strip_size_kb": 0, 00:23:39.595 "state": "online", 00:23:39.595 "raid_level": "raid1", 00:23:39.595 "superblock": true, 00:23:39.595 "num_base_bdevs": 2, 00:23:39.595 "num_base_bdevs_discovered": 1, 00:23:39.595 "num_base_bdevs_operational": 1, 00:23:39.595 "base_bdevs_list": [ 00:23:39.595 { 00:23:39.595 "name": null, 00:23:39.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:39.595 "is_configured": false, 00:23:39.595 "data_offset": 0, 00:23:39.595 "data_size": 7936 00:23:39.595 }, 00:23:39.595 { 00:23:39.595 "name": "BaseBdev2", 00:23:39.595 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:39.595 "is_configured": true, 00:23:39.595 "data_offset": 256, 00:23:39.595 "data_size": 7936 00:23:39.595 } 00:23:39.595 ] 00:23:39.595 }' 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:39.595 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:40.163 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.163 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:40.163 [2024-10-30 10:52:01.521456] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.163 [2024-10-30 10:52:01.538898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:40.163 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.163 10:52:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:40.163 [2024-10-30 10:52:01.541455] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.095 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:41.354 "name": "raid_bdev1", 00:23:41.354 
"uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:41.354 "strip_size_kb": 0, 00:23:41.354 "state": "online", 00:23:41.354 "raid_level": "raid1", 00:23:41.354 "superblock": true, 00:23:41.354 "num_base_bdevs": 2, 00:23:41.354 "num_base_bdevs_discovered": 2, 00:23:41.354 "num_base_bdevs_operational": 2, 00:23:41.354 "process": { 00:23:41.354 "type": "rebuild", 00:23:41.354 "target": "spare", 00:23:41.354 "progress": { 00:23:41.354 "blocks": 2560, 00:23:41.354 "percent": 32 00:23:41.354 } 00:23:41.354 }, 00:23:41.354 "base_bdevs_list": [ 00:23:41.354 { 00:23:41.354 "name": "spare", 00:23:41.354 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:41.354 "is_configured": true, 00:23:41.354 "data_offset": 256, 00:23:41.354 "data_size": 7936 00:23:41.354 }, 00:23:41.354 { 00:23:41.354 "name": "BaseBdev2", 00:23:41.354 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:41.354 "is_configured": true, 00:23:41.354 "data_offset": 256, 00:23:41.354 "data_size": 7936 00:23:41.354 } 00:23:41.354 ] 00:23:41.354 }' 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:41.354 [2024-10-30 10:52:02.714699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:23:41.354 [2024-10-30 10:52:02.750598] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:41.354 [2024-10-30 10:52:02.750695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.354 [2024-10-30 10:52:02.750721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:41.354 [2024-10-30 10:52:02.750740] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:41.354 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.612 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.613 "name": "raid_bdev1", 00:23:41.613 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:41.613 "strip_size_kb": 0, 00:23:41.613 "state": "online", 00:23:41.613 "raid_level": "raid1", 00:23:41.613 "superblock": true, 00:23:41.613 "num_base_bdevs": 2, 00:23:41.613 "num_base_bdevs_discovered": 1, 00:23:41.613 "num_base_bdevs_operational": 1, 00:23:41.613 "base_bdevs_list": [ 00:23:41.613 { 00:23:41.613 "name": null, 00:23:41.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.613 "is_configured": false, 00:23:41.613 "data_offset": 0, 00:23:41.613 "data_size": 7936 00:23:41.613 }, 00:23:41.613 { 00:23:41.613 "name": "BaseBdev2", 00:23:41.613 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:41.613 "is_configured": true, 00:23:41.613 "data_offset": 256, 00:23:41.613 "data_size": 7936 00:23:41.613 } 00:23:41.613 ] 00:23:41.613 }' 00:23:41.613 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.613 10:52:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:41.870 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:42.128 "name": "raid_bdev1", 00:23:42.128 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:42.128 "strip_size_kb": 0, 00:23:42.128 "state": "online", 00:23:42.128 "raid_level": "raid1", 00:23:42.128 "superblock": true, 00:23:42.128 "num_base_bdevs": 2, 00:23:42.128 "num_base_bdevs_discovered": 1, 00:23:42.128 "num_base_bdevs_operational": 1, 00:23:42.128 "base_bdevs_list": [ 00:23:42.128 { 00:23:42.128 "name": null, 00:23:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.128 "is_configured": false, 00:23:42.128 "data_offset": 0, 00:23:42.128 "data_size": 7936 00:23:42.128 }, 00:23:42.128 { 00:23:42.128 "name": "BaseBdev2", 00:23:42.128 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:42.128 "is_configured": true, 00:23:42.128 "data_offset": 256, 00:23:42.128 "data_size": 7936 00:23:42.128 } 00:23:42.128 ] 00:23:42.128 }' 
00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:42.128 [2024-10-30 10:52:03.480556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:42.128 [2024-10-30 10:52:03.497887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.128 10:52:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:42.128 [2024-10-30 10:52:03.500621] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:43.093 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.093 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.093 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.093 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:23:43.093 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.094 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.094 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.094 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.094 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.094 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.352 "name": "raid_bdev1", 00:23:43.352 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:43.352 "strip_size_kb": 0, 00:23:43.352 "state": "online", 00:23:43.352 "raid_level": "raid1", 00:23:43.352 "superblock": true, 00:23:43.352 "num_base_bdevs": 2, 00:23:43.352 "num_base_bdevs_discovered": 2, 00:23:43.352 "num_base_bdevs_operational": 2, 00:23:43.352 "process": { 00:23:43.352 "type": "rebuild", 00:23:43.352 "target": "spare", 00:23:43.352 "progress": { 00:23:43.352 "blocks": 2560, 00:23:43.352 "percent": 32 00:23:43.352 } 00:23:43.352 }, 00:23:43.352 "base_bdevs_list": [ 00:23:43.352 { 00:23:43.352 "name": "spare", 00:23:43.352 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:43.352 "is_configured": true, 00:23:43.352 "data_offset": 256, 00:23:43.352 "data_size": 7936 00:23:43.352 }, 00:23:43.352 { 00:23:43.352 "name": "BaseBdev2", 00:23:43.352 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:43.352 "is_configured": true, 00:23:43.352 "data_offset": 256, 00:23:43.352 "data_size": 7936 00:23:43.352 } 00:23:43.352 ] 00:23:43.352 }' 00:23:43.352 10:52:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:43.352 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=798 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:43.352 10:52:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:43.352 "name": "raid_bdev1", 00:23:43.352 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:43.352 "strip_size_kb": 0, 00:23:43.352 "state": "online", 00:23:43.352 "raid_level": "raid1", 00:23:43.352 "superblock": true, 00:23:43.352 "num_base_bdevs": 2, 00:23:43.352 "num_base_bdevs_discovered": 2, 00:23:43.352 "num_base_bdevs_operational": 2, 00:23:43.352 "process": { 00:23:43.352 "type": "rebuild", 00:23:43.352 "target": "spare", 00:23:43.352 "progress": { 00:23:43.352 "blocks": 2816, 00:23:43.352 "percent": 35 00:23:43.352 } 00:23:43.352 }, 00:23:43.352 "base_bdevs_list": [ 00:23:43.352 { 00:23:43.352 "name": "spare", 00:23:43.352 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:43.352 "is_configured": true, 00:23:43.352 "data_offset": 256, 00:23:43.352 "data_size": 7936 00:23:43.352 }, 00:23:43.352 { 00:23:43.352 "name": "BaseBdev2", 00:23:43.352 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:43.352 "is_configured": true, 00:23:43.352 "data_offset": 256, 00:23:43.352 "data_size": 7936 00:23:43.352 } 00:23:43.352 ] 00:23:43.352 }' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.352 10:52:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.732 10:52:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:44.732 "name": "raid_bdev1", 00:23:44.732 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:44.732 "strip_size_kb": 0, 00:23:44.732 "state": "online", 00:23:44.732 "raid_level": "raid1", 00:23:44.732 "superblock": true, 00:23:44.732 "num_base_bdevs": 2, 00:23:44.732 "num_base_bdevs_discovered": 2, 00:23:44.732 "num_base_bdevs_operational": 2, 00:23:44.732 "process": { 00:23:44.732 "type": "rebuild", 00:23:44.732 "target": "spare", 00:23:44.732 "progress": { 00:23:44.732 "blocks": 5888, 00:23:44.732 "percent": 74 00:23:44.732 } 00:23:44.732 }, 00:23:44.732 "base_bdevs_list": [ 00:23:44.732 { 00:23:44.732 "name": "spare", 00:23:44.732 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:44.732 "is_configured": true, 00:23:44.732 "data_offset": 256, 00:23:44.732 "data_size": 7936 00:23:44.732 }, 00:23:44.732 { 00:23:44.732 "name": "BaseBdev2", 00:23:44.732 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:44.732 "is_configured": true, 00:23:44.732 "data_offset": 256, 00:23:44.732 "data_size": 7936 00:23:44.732 } 00:23:44.732 ] 00:23:44.732 }' 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.732 10:52:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:45.300 [2024-10-30 10:52:06.624510] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:45.300 [2024-10-30 10:52:06.624694] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:45.300 [2024-10-30 10:52:06.624855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.559 10:52:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.559 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.819 "name": "raid_bdev1", 00:23:45.819 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:45.819 "strip_size_kb": 0, 00:23:45.819 "state": "online", 00:23:45.819 "raid_level": "raid1", 00:23:45.819 "superblock": true, 00:23:45.819 "num_base_bdevs": 2, 00:23:45.819 
"num_base_bdevs_discovered": 2, 00:23:45.819 "num_base_bdevs_operational": 2, 00:23:45.819 "base_bdevs_list": [ 00:23:45.819 { 00:23:45.819 "name": "spare", 00:23:45.819 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:45.819 "is_configured": true, 00:23:45.819 "data_offset": 256, 00:23:45.819 "data_size": 7936 00:23:45.819 }, 00:23:45.819 { 00:23:45.819 "name": "BaseBdev2", 00:23:45.819 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:45.819 "is_configured": true, 00:23:45.819 "data_offset": 256, 00:23:45.819 "data_size": 7936 00:23:45.819 } 00:23:45.819 ] 00:23:45.819 }' 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:45.819 10:52:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:45.819 "name": "raid_bdev1", 00:23:45.819 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:45.819 "strip_size_kb": 0, 00:23:45.819 "state": "online", 00:23:45.819 "raid_level": "raid1", 00:23:45.819 "superblock": true, 00:23:45.819 "num_base_bdevs": 2, 00:23:45.819 "num_base_bdevs_discovered": 2, 00:23:45.819 "num_base_bdevs_operational": 2, 00:23:45.819 "base_bdevs_list": [ 00:23:45.819 { 00:23:45.819 "name": "spare", 00:23:45.819 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:45.819 "is_configured": true, 00:23:45.819 "data_offset": 256, 00:23:45.819 "data_size": 7936 00:23:45.819 }, 00:23:45.819 { 00:23:45.819 "name": "BaseBdev2", 00:23:45.819 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:45.819 "is_configured": true, 00:23:45.819 "data_offset": 256, 00:23:45.819 "data_size": 7936 00:23:45.819 } 00:23:45.819 ] 00:23:45.819 }' 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:45.819 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:46.078 10:52:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.078 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.079 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.079 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.079 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.079 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.079 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.079 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.079 "name": 
"raid_bdev1", 00:23:46.079 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:46.079 "strip_size_kb": 0, 00:23:46.079 "state": "online", 00:23:46.079 "raid_level": "raid1", 00:23:46.079 "superblock": true, 00:23:46.079 "num_base_bdevs": 2, 00:23:46.079 "num_base_bdevs_discovered": 2, 00:23:46.079 "num_base_bdevs_operational": 2, 00:23:46.079 "base_bdevs_list": [ 00:23:46.079 { 00:23:46.079 "name": "spare", 00:23:46.079 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:46.079 "is_configured": true, 00:23:46.079 "data_offset": 256, 00:23:46.079 "data_size": 7936 00:23:46.079 }, 00:23:46.079 { 00:23:46.079 "name": "BaseBdev2", 00:23:46.079 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:46.079 "is_configured": true, 00:23:46.079 "data_offset": 256, 00:23:46.079 "data_size": 7936 00:23:46.079 } 00:23:46.079 ] 00:23:46.079 }' 00:23:46.079 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.079 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 [2024-10-30 10:52:07.853903] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.647 [2024-10-30 10:52:07.853967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:46.647 [2024-10-30 10:52:07.854121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:46.647 [2024-10-30 10:52:07.854219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:46.647 [2024-10-30 
10:52:07.854237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.647 10:52:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 [2024-10-30 10:52:07.925862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:46.647 [2024-10-30 10:52:07.925947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.647 [2024-10-30 10:52:07.925980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:46.647 [2024-10-30 10:52:07.926018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.647 [2024-10-30 10:52:07.928905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.647 [2024-10-30 10:52:07.930146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:46.647 [2024-10-30 10:52:07.930246] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:46.647 [2024-10-30 10:52:07.930337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:46.647 [2024-10-30 10:52:07.930545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:46.647 spare 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.647 10:52:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 [2024-10-30 10:52:08.030700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:46.647 [2024-10-30 10:52:08.031000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:46.647 [2024-10-30 10:52:08.031199] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:46.647 [2024-10-30 10:52:08.031342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:46.647 [2024-10-30 10:52:08.031358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:46.647 [2024-10-30 10:52:08.031499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.647 10:52:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.647 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.647 "name": "raid_bdev1", 00:23:46.647 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:46.647 "strip_size_kb": 0, 00:23:46.647 "state": "online", 00:23:46.647 "raid_level": "raid1", 00:23:46.647 "superblock": true, 00:23:46.647 "num_base_bdevs": 2, 00:23:46.647 "num_base_bdevs_discovered": 2, 00:23:46.647 "num_base_bdevs_operational": 2, 00:23:46.647 "base_bdevs_list": [ 00:23:46.647 { 00:23:46.647 "name": "spare", 00:23:46.647 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:46.648 "is_configured": true, 00:23:46.648 "data_offset": 256, 00:23:46.648 "data_size": 7936 00:23:46.648 }, 00:23:46.648 { 00:23:46.648 "name": "BaseBdev2", 00:23:46.648 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:46.648 "is_configured": true, 00:23:46.648 "data_offset": 256, 00:23:46.648 "data_size": 7936 00:23:46.648 } 00:23:46.648 ] 00:23:46.648 }' 00:23:46.648 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.648 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:47.215 10:52:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:47.215 "name": "raid_bdev1", 00:23:47.215 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:47.215 "strip_size_kb": 0, 00:23:47.215 "state": "online", 00:23:47.215 "raid_level": "raid1", 00:23:47.215 "superblock": true, 00:23:47.215 "num_base_bdevs": 2, 00:23:47.215 "num_base_bdevs_discovered": 2, 00:23:47.215 "num_base_bdevs_operational": 2, 00:23:47.215 "base_bdevs_list": [ 00:23:47.215 { 00:23:47.215 "name": "spare", 00:23:47.215 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:47.215 "is_configured": true, 00:23:47.215 "data_offset": 256, 00:23:47.215 "data_size": 7936 00:23:47.215 }, 00:23:47.215 { 00:23:47.215 "name": "BaseBdev2", 00:23:47.215 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:47.215 "is_configured": true, 00:23:47.215 "data_offset": 256, 00:23:47.215 "data_size": 7936 00:23:47.215 } 00:23:47.215 ] 00:23:47.215 }' 00:23:47.215 10:52:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.215 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.474 [2024-10-30 10:52:08.722713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:47.474 10:52:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.474 "name": "raid_bdev1", 00:23:47.474 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:47.474 "strip_size_kb": 0, 00:23:47.474 "state": "online", 00:23:47.474 
"raid_level": "raid1", 00:23:47.474 "superblock": true, 00:23:47.474 "num_base_bdevs": 2, 00:23:47.474 "num_base_bdevs_discovered": 1, 00:23:47.474 "num_base_bdevs_operational": 1, 00:23:47.474 "base_bdevs_list": [ 00:23:47.474 { 00:23:47.474 "name": null, 00:23:47.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.474 "is_configured": false, 00:23:47.474 "data_offset": 0, 00:23:47.474 "data_size": 7936 00:23:47.474 }, 00:23:47.474 { 00:23:47.474 "name": "BaseBdev2", 00:23:47.474 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:47.474 "is_configured": true, 00:23:47.474 "data_offset": 256, 00:23:47.474 "data_size": 7936 00:23:47.474 } 00:23:47.474 ] 00:23:47.474 }' 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.474 10:52:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.043 10:52:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:48.043 10:52:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.043 10:52:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.043 [2024-10-30 10:52:09.234920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:48.043 [2024-10-30 10:52:09.235238] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:48.043 [2024-10-30 10:52:09.235267] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:48.043 [2024-10-30 10:52:09.235319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:48.043 [2024-10-30 10:52:09.252695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:48.043 10:52:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.043 10:52:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:48.043 [2024-10-30 10:52:09.255599] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:48.981 "name": "raid_bdev1", 00:23:48.981 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:48.981 "strip_size_kb": 0, 00:23:48.981 "state": "online", 00:23:48.981 "raid_level": "raid1", 00:23:48.981 "superblock": true, 00:23:48.981 "num_base_bdevs": 2, 00:23:48.981 "num_base_bdevs_discovered": 2, 00:23:48.981 "num_base_bdevs_operational": 2, 00:23:48.981 "process": { 00:23:48.981 "type": "rebuild", 00:23:48.981 "target": "spare", 00:23:48.981 "progress": { 00:23:48.981 "blocks": 2560, 00:23:48.981 "percent": 32 00:23:48.981 } 00:23:48.981 }, 00:23:48.981 "base_bdevs_list": [ 00:23:48.981 { 00:23:48.981 "name": "spare", 00:23:48.981 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:48.981 "is_configured": true, 00:23:48.981 "data_offset": 256, 00:23:48.981 "data_size": 7936 00:23:48.981 }, 00:23:48.981 { 00:23:48.981 "name": "BaseBdev2", 00:23:48.981 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:48.981 "is_configured": true, 00:23:48.981 "data_offset": 256, 00:23:48.981 "data_size": 7936 00:23:48.981 } 00:23:48.981 ] 00:23:48.981 }' 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.981 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:48.981 [2024-10-30 10:52:10.420684] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:49.240 [2024-10-30 10:52:10.464658] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:49.240 [2024-10-30 10:52:10.464755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.240 [2024-10-30 10:52:10.464779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:49.240 [2024-10-30 10:52:10.464793] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:49.240 10:52:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:49.240 "name": "raid_bdev1", 00:23:49.240 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:49.240 "strip_size_kb": 0, 00:23:49.240 "state": "online", 00:23:49.240 "raid_level": "raid1", 00:23:49.240 "superblock": true, 00:23:49.240 "num_base_bdevs": 2, 00:23:49.240 "num_base_bdevs_discovered": 1, 00:23:49.240 "num_base_bdevs_operational": 1, 00:23:49.240 "base_bdevs_list": [ 00:23:49.240 { 00:23:49.240 "name": null, 00:23:49.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:49.240 "is_configured": false, 00:23:49.240 "data_offset": 0, 00:23:49.240 "data_size": 7936 00:23:49.240 }, 00:23:49.240 { 00:23:49.240 "name": "BaseBdev2", 00:23:49.240 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:49.240 "is_configured": true, 00:23:49.240 "data_offset": 256, 00:23:49.240 "data_size": 7936 00:23:49.240 } 00:23:49.240 ] 00:23:49.240 }' 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:49.240 10:52:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.808 10:52:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:49.808 10:52:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.808 10:52:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:49.808 [2024-10-30 10:52:11.021312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:49.808 [2024-10-30 10:52:11.021604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.808 [2024-10-30 10:52:11.021685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:49.808 [2024-10-30 10:52:11.021891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.808 [2024-10-30 10:52:11.022177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.808 [2024-10-30 10:52:11.022220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:49.808 [2024-10-30 10:52:11.022301] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:49.808 [2024-10-30 10:52:11.022325] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:49.808 [2024-10-30 10:52:11.022339] bdev_raid.c:3752:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:49.808 [2024-10-30 10:52:11.022408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:49.808 [2024-10-30 10:52:11.038194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:49.808 spare 00:23:49.808 10:52:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.808 10:52:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:49.808 [2024-10-30 10:52:11.040642] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.790 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:50.790 "name": "raid_bdev1", 00:23:50.790 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:50.790 "strip_size_kb": 0, 00:23:50.790 "state": "online", 00:23:50.790 "raid_level": "raid1", 00:23:50.790 "superblock": true, 00:23:50.790 "num_base_bdevs": 2, 00:23:50.790 "num_base_bdevs_discovered": 2, 00:23:50.790 "num_base_bdevs_operational": 2, 00:23:50.790 "process": { 00:23:50.790 "type": "rebuild", 00:23:50.790 "target": "spare", 00:23:50.790 "progress": { 00:23:50.790 "blocks": 2560, 00:23:50.790 "percent": 32 00:23:50.790 } 00:23:50.790 }, 00:23:50.790 "base_bdevs_list": [ 00:23:50.790 { 00:23:50.790 "name": "spare", 00:23:50.790 "uuid": "fa763386-69ed-5b3b-8bc0-8a7884abd7b8", 00:23:50.790 "is_configured": true, 00:23:50.790 "data_offset": 256, 00:23:50.790 "data_size": 7936 00:23:50.790 }, 00:23:50.790 { 00:23:50.790 "name": "BaseBdev2", 00:23:50.790 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:50.790 "is_configured": true, 00:23:50.790 "data_offset": 256, 00:23:50.790 "data_size": 7936 00:23:50.791 } 00:23:50.791 ] 00:23:50.791 }' 00:23:50.791 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:50.791 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:50.791 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:50.791 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:50.791 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:50.791 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.791 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:50.791 [2024-10-30 
10:52:12.206025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:50.791 [2024-10-30 10:52:12.248947] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:50.791 [2024-10-30 10:52:12.249242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.791 [2024-10-30 10:52:12.249394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:50.791 [2024-10-30 10:52:12.249447] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.049 10:52:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.049 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.049 "name": "raid_bdev1", 00:23:51.049 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:51.050 "strip_size_kb": 0, 00:23:51.050 "state": "online", 00:23:51.050 "raid_level": "raid1", 00:23:51.050 "superblock": true, 00:23:51.050 "num_base_bdevs": 2, 00:23:51.050 "num_base_bdevs_discovered": 1, 00:23:51.050 "num_base_bdevs_operational": 1, 00:23:51.050 "base_bdevs_list": [ 00:23:51.050 { 00:23:51.050 "name": null, 00:23:51.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.050 "is_configured": false, 00:23:51.050 "data_offset": 0, 00:23:51.050 "data_size": 7936 00:23:51.050 }, 00:23:51.050 { 00:23:51.050 "name": "BaseBdev2", 00:23:51.050 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:51.050 "is_configured": true, 00:23:51.050 "data_offset": 256, 00:23:51.050 "data_size": 7936 00:23:51.050 } 00:23:51.050 ] 00:23:51.050 }' 00:23:51.050 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.050 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.618 10:52:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:51.618 "name": "raid_bdev1", 00:23:51.618 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:51.618 "strip_size_kb": 0, 00:23:51.618 "state": "online", 00:23:51.618 "raid_level": "raid1", 00:23:51.618 "superblock": true, 00:23:51.618 "num_base_bdevs": 2, 00:23:51.618 "num_base_bdevs_discovered": 1, 00:23:51.618 "num_base_bdevs_operational": 1, 00:23:51.618 "base_bdevs_list": [ 00:23:51.618 { 00:23:51.618 "name": null, 00:23:51.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.618 "is_configured": false, 00:23:51.618 "data_offset": 0, 00:23:51.618 "data_size": 7936 00:23:51.618 }, 00:23:51.618 { 00:23:51.618 "name": "BaseBdev2", 00:23:51.618 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:51.618 "is_configured": true, 00:23:51.618 "data_offset": 256, 
00:23:51.618 "data_size": 7936 00:23:51.618 } 00:23:51.618 ] 00:23:51.618 }' 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:51.618 [2024-10-30 10:52:12.982130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:51.618 [2024-10-30 10:52:12.982202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.618 [2024-10-30 10:52:12.982236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:51.618 [2024-10-30 10:52:12.982251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.618 [2024-10-30 10:52:12.982463] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.618 [2024-10-30 10:52:12.982485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:51.618 [2024-10-30 10:52:12.982554] bdev_raid.c:3901:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:51.618 [2024-10-30 10:52:12.982574] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:51.618 [2024-10-30 10:52:12.982587] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:51.618 [2024-10-30 10:52:12.982600] bdev_raid.c:3888:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:51.618 BaseBdev1 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.618 10:52:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:52.553 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:52.553 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:52.553 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.554 10:52:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:52.554 10:52:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.554 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.812 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:52.812 "name": "raid_bdev1", 00:23:52.812 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:52.812 "strip_size_kb": 0, 00:23:52.812 "state": "online", 00:23:52.812 "raid_level": "raid1", 00:23:52.812 "superblock": true, 00:23:52.812 "num_base_bdevs": 2, 00:23:52.812 "num_base_bdevs_discovered": 1, 00:23:52.812 "num_base_bdevs_operational": 1, 00:23:52.812 "base_bdevs_list": [ 00:23:52.812 { 00:23:52.812 "name": null, 00:23:52.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.812 "is_configured": false, 00:23:52.812 "data_offset": 0, 00:23:52.812 "data_size": 7936 00:23:52.812 }, 00:23:52.812 { 00:23:52.812 "name": "BaseBdev2", 00:23:52.812 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:52.812 "is_configured": true, 00:23:52.812 "data_offset": 256, 00:23:52.812 "data_size": 7936 00:23:52.812 } 00:23:52.812 ] 00:23:52.812 }' 00:23:52.812 10:52:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:52.812 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:53.071 "name": "raid_bdev1", 00:23:53.071 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:53.071 "strip_size_kb": 0, 00:23:53.071 "state": "online", 00:23:53.071 "raid_level": "raid1", 00:23:53.071 "superblock": true, 00:23:53.071 "num_base_bdevs": 2, 00:23:53.071 "num_base_bdevs_discovered": 1, 00:23:53.071 "num_base_bdevs_operational": 1, 00:23:53.071 "base_bdevs_list": [ 00:23:53.071 { 00:23:53.071 "name": 
null, 00:23:53.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.071 "is_configured": false, 00:23:53.071 "data_offset": 0, 00:23:53.071 "data_size": 7936 00:23:53.071 }, 00:23:53.071 { 00:23:53.071 "name": "BaseBdev2", 00:23:53.071 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:53.071 "is_configured": true, 00:23:53.071 "data_offset": 256, 00:23:53.071 "data_size": 7936 00:23:53.071 } 00:23:53.071 ] 00:23:53.071 }' 00:23:53.071 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:53.330 [2024-10-30 10:52:14.654667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:53.330 [2024-10-30 10:52:14.654847] bdev_raid.c:3694:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:53.330 [2024-10-30 10:52:14.654874] bdev_raid.c:3713:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:53.330 request: 00:23:53.330 { 00:23:53.330 "base_bdev": "BaseBdev1", 00:23:53.330 "raid_bdev": "raid_bdev1", 00:23:53.330 "method": "bdev_raid_add_base_bdev", 00:23:53.330 "req_id": 1 00:23:53.330 } 00:23:53.330 Got JSON-RPC error response 00:23:53.330 response: 00:23:53.330 { 00:23:53.330 "code": -22, 00:23:53.330 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:53.330 } 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:53.330 10:52:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:54.266 "name": "raid_bdev1", 00:23:54.266 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:54.266 "strip_size_kb": 0, 
00:23:54.266 "state": "online", 00:23:54.266 "raid_level": "raid1", 00:23:54.266 "superblock": true, 00:23:54.266 "num_base_bdevs": 2, 00:23:54.266 "num_base_bdevs_discovered": 1, 00:23:54.266 "num_base_bdevs_operational": 1, 00:23:54.266 "base_bdevs_list": [ 00:23:54.266 { 00:23:54.266 "name": null, 00:23:54.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.266 "is_configured": false, 00:23:54.266 "data_offset": 0, 00:23:54.266 "data_size": 7936 00:23:54.266 }, 00:23:54.266 { 00:23:54.266 "name": "BaseBdev2", 00:23:54.266 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:54.266 "is_configured": true, 00:23:54.266 "data_offset": 256, 00:23:54.266 "data_size": 7936 00:23:54.266 } 00:23:54.266 ] 00:23:54.266 }' 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:54.266 10:52:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.832 
10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:54.832 "name": "raid_bdev1", 00:23:54.832 "uuid": "36a8d302-ede5-4409-8f35-b1ddc91c1db5", 00:23:54.832 "strip_size_kb": 0, 00:23:54.832 "state": "online", 00:23:54.832 "raid_level": "raid1", 00:23:54.832 "superblock": true, 00:23:54.832 "num_base_bdevs": 2, 00:23:54.832 "num_base_bdevs_discovered": 1, 00:23:54.832 "num_base_bdevs_operational": 1, 00:23:54.832 "base_bdevs_list": [ 00:23:54.832 { 00:23:54.832 "name": null, 00:23:54.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.832 "is_configured": false, 00:23:54.832 "data_offset": 0, 00:23:54.832 "data_size": 7936 00:23:54.832 }, 00:23:54.832 { 00:23:54.832 "name": "BaseBdev2", 00:23:54.832 "uuid": "db2bc646-2000-572d-8079-73cbbb122e3d", 00:23:54.832 "is_configured": true, 00:23:54.832 "data_offset": 256, 00:23:54.832 "data_size": 7936 00:23:54.832 } 00:23:54.832 ] 00:23:54.832 }' 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:54.832 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89670 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89670 ']' 00:23:55.090 10:52:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89670 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89670 00:23:55.090 killing process with pid 89670 00:23:55.090 Received shutdown signal, test time was about 60.000000 seconds 00:23:55.090 00:23:55.090 Latency(us) 00:23:55.090 [2024-10-30T10:52:16.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.090 [2024-10-30T10:52:16.560Z] =================================================================================================================== 00:23:55.090 [2024-10-30T10:52:16.560Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89670' 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89670 00:23:55.090 [2024-10-30 10:52:16.380195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:55.090 10:52:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89670 00:23:55.090 [2024-10-30 10:52:16.380464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.090 [2024-10-30 10:52:16.380537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:23:55.090 [2024-10-30 10:52:16.380571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:55.348 [2024-10-30 10:52:16.636226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:56.296 10:52:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:23:56.296 00:23:56.296 real 0m18.481s 00:23:56.296 user 0m25.189s 00:23:56.296 sys 0m1.444s 00:23:56.296 ************************************ 00:23:56.296 END TEST raid_rebuild_test_sb_md_interleaved 00:23:56.296 ************************************ 00:23:56.296 10:52:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:56.296 10:52:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:56.296 10:52:17 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:23:56.296 10:52:17 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:23:56.296 10:52:17 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89670 ']' 00:23:56.296 10:52:17 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89670 00:23:56.296 10:52:17 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:23:56.296 ************************************ 00:23:56.296 END TEST bdev_raid 00:23:56.296 ************************************ 00:23:56.296 00:23:56.296 real 13m0.837s 00:23:56.296 user 18m26.194s 00:23:56.296 sys 1m45.180s 00:23:56.296 10:52:17 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:56.296 10:52:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:56.296 10:52:17 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:56.581 10:52:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:56.581 10:52:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:56.581 10:52:17 -- common/autotest_common.sh@10 -- # set +x 00:23:56.581 
************************************ 00:23:56.581 START TEST spdkcli_raid 00:23:56.581 ************************************ 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:56.581 * Looking for test storage... 00:23:56.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.581 10:52:17 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:56.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.581 --rc genhtml_branch_coverage=1 00:23:56.581 --rc genhtml_function_coverage=1 00:23:56.581 --rc genhtml_legend=1 00:23:56.581 --rc geninfo_all_blocks=1 00:23:56.581 --rc geninfo_unexecuted_blocks=1 00:23:56.581 00:23:56.581 ' 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:56.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.581 --rc genhtml_branch_coverage=1 00:23:56.581 --rc genhtml_function_coverage=1 00:23:56.581 --rc genhtml_legend=1 00:23:56.581 --rc geninfo_all_blocks=1 00:23:56.581 --rc geninfo_unexecuted_blocks=1 00:23:56.581 00:23:56.581 ' 00:23:56.581 
10:52:17 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:56.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.581 --rc genhtml_branch_coverage=1 00:23:56.581 --rc genhtml_function_coverage=1 00:23:56.581 --rc genhtml_legend=1 00:23:56.581 --rc geninfo_all_blocks=1 00:23:56.581 --rc geninfo_unexecuted_blocks=1 00:23:56.581 00:23:56.581 ' 00:23:56.581 10:52:17 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:56.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.581 --rc genhtml_branch_coverage=1 00:23:56.581 --rc genhtml_function_coverage=1 00:23:56.581 --rc genhtml_legend=1 00:23:56.581 --rc geninfo_all_blocks=1 00:23:56.581 --rc geninfo_unexecuted_blocks=1 00:23:56.581 00:23:56.581 ' 00:23:56.581 10:52:17 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:56.581 10:52:17 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:56.581 10:52:17 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:56.581 10:52:17 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:56.581 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:56.582 10:52:17 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:23:56.582 10:52:17 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.582 10:52:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:56.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90353 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90353 00:23:56.582 10:52:17 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90353 ']' 00:23:56.582 10:52:17 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.582 10:52:17 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:56.582 10:52:17 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.582 10:52:17 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.582 10:52:17 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.582 10:52:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:56.840 [2024-10-30 10:52:18.125789] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:23:56.840 [2024-10-30 10:52:18.126018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90353 ] 00:23:57.099 [2024-10-30 10:52:18.317912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:57.099 [2024-10-30 10:52:18.452609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.099 [2024-10-30 10:52:18.452615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.034 10:52:19 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:58.034 10:52:19 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:23:58.034 10:52:19 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:23:58.034 10:52:19 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:58.034 10:52:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:58.034 10:52:19 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:23:58.034 10:52:19 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:58.034 10:52:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:58.034 10:52:19 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:58.034 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:58.034 ' 00:23:59.937 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:23:59.937 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:23:59.937 10:52:21 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:23:59.937 10:52:21 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:59.937 10:52:21 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.937 10:52:21 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:23:59.937 10:52:21 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:59.937 10:52:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:59.937 10:52:21 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:23:59.937 ' 00:24:00.873 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:24:00.873 10:52:22 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:24:00.873 10:52:22 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:00.873 10:52:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:00.873 10:52:22 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:24:00.873 10:52:22 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.873 10:52:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:00.873 10:52:22 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:24:00.873 10:52:22 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:24:01.441 10:52:22 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:24:01.701 10:52:22 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:24:01.701 10:52:22 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:24:01.701 10:52:22 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.701 10:52:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:01.701 10:52:22 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:24:01.701 10:52:22 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.702 10:52:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:01.702 10:52:22 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:24:01.702 ' 00:24:02.637 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:24:02.900 10:52:24 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:24:02.900 10:52:24 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:02.900 10:52:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:02.900 10:52:24 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:24:02.900 10:52:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.900 10:52:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:02.900 10:52:24 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:24:02.900 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:24:02.900 ' 00:24:04.285 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:24:04.285 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:24:04.285 10:52:25 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:24:04.285 10:52:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:04.285 10:52:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:04.543 10:52:25 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90353 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90353 ']' 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90353 00:24:04.543 10:52:25 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90353 00:24:04.543 killing process with pid 90353 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90353' 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90353 00:24:04.543 10:52:25 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90353 00:24:07.075 10:52:28 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:24:07.075 10:52:28 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90353 ']' 00:24:07.075 10:52:28 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90353 00:24:07.075 10:52:28 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90353 ']' 00:24:07.075 Process with pid 90353 is not found 00:24:07.075 10:52:28 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90353 00:24:07.075 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90353) - No such process 00:24:07.075 10:52:28 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90353 is not found' 00:24:07.075 10:52:28 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:24:07.075 10:52:28 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:07.075 10:52:28 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:07.075 10:52:28 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:07.075 ************************************ 00:24:07.075 END TEST spdkcli_raid 
00:24:07.075 ************************************ 00:24:07.075 00:24:07.075 real 0m10.604s 00:24:07.075 user 0m21.850s 00:24:07.075 sys 0m1.250s 00:24:07.075 10:52:28 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:07.075 10:52:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:07.075 10:52:28 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:07.075 10:52:28 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:07.075 10:52:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:07.075 10:52:28 -- common/autotest_common.sh@10 -- # set +x 00:24:07.075 ************************************ 00:24:07.075 START TEST blockdev_raid5f 00:24:07.075 ************************************ 00:24:07.075 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:24:07.075 * Looking for test storage... 00:24:07.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:07.075 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:07.075 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:24:07.075 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:07.335 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.335 10:52:28 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:24:07.335 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.335 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:07.335 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.335 --rc genhtml_branch_coverage=1 00:24:07.335 --rc genhtml_function_coverage=1 00:24:07.335 --rc genhtml_legend=1 00:24:07.335 --rc geninfo_all_blocks=1 00:24:07.335 --rc geninfo_unexecuted_blocks=1 00:24:07.335 00:24:07.335 ' 00:24:07.335 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:07.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.335 --rc genhtml_branch_coverage=1 00:24:07.335 --rc genhtml_function_coverage=1 00:24:07.335 --rc genhtml_legend=1 00:24:07.335 --rc geninfo_all_blocks=1 00:24:07.335 --rc geninfo_unexecuted_blocks=1 00:24:07.335 00:24:07.335 ' 00:24:07.336 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:07.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.336 --rc genhtml_branch_coverage=1 00:24:07.336 --rc genhtml_function_coverage=1 00:24:07.336 --rc genhtml_legend=1 00:24:07.336 --rc geninfo_all_blocks=1 00:24:07.336 --rc geninfo_unexecuted_blocks=1 00:24:07.336 00:24:07.336 ' 00:24:07.336 10:52:28 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:07.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.336 --rc genhtml_branch_coverage=1 00:24:07.336 --rc genhtml_function_coverage=1 00:24:07.336 --rc genhtml_legend=1 00:24:07.336 --rc geninfo_all_blocks=1 00:24:07.336 --rc geninfo_unexecuted_blocks=1 00:24:07.336 00:24:07.336 ' 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:24:07.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90635 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90635 00:24:07.336 10:52:28 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90635 ']' 00:24:07.336 10:52:28 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.336 10:52:28 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:07.336 10:52:28 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:07.336 10:52:28 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.336 10:52:28 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:07.336 10:52:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:07.336 [2024-10-30 10:52:28.772126] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:24:07.336 [2024-10-30 10:52:28.772566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90635 ] 00:24:07.595 [2024-10-30 10:52:28.962343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.854 [2024-10-30 10:52:29.104772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:08.803 Malloc0 00:24:08.803 Malloc1 00:24:08.803 Malloc2 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:08.803 10:52:30 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:24:08.803 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.803 10:52:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:08.804 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:24:09.062 10:52:30 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.062 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:24:09.062 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:24:09.062 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fb8025d6-c836-418a-b367-48095b234ae0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fb8025d6-c836-418a-b367-48095b234ae0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fb8025d6-c836-418a-b367-48095b234ae0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f3f8e9ea-155f-41f0-8b9b-4d27a6641c10",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7f21b63f-14af-45a0-92e9-b437aa949f2b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ad834174-07a9-431b-a9be-f574973afdb3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:09.062 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:24:09.062 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:24:09.062 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:24:09.062 10:52:30 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90635 00:24:09.062 10:52:30 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90635 ']' 00:24:09.063 10:52:30 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90635 00:24:09.063 10:52:30 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:24:09.063 10:52:30 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:09.063 
10:52:30 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90635 00:24:09.063 killing process with pid 90635 00:24:09.063 10:52:30 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:09.063 10:52:30 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:09.063 10:52:30 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90635' 00:24:09.063 10:52:30 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90635 00:24:09.063 10:52:30 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90635 00:24:11.656 10:52:32 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:11.656 10:52:32 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:11.656 10:52:32 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:11.656 10:52:32 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:11.656 10:52:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:11.656 ************************************ 00:24:11.656 START TEST bdev_hello_world 00:24:11.656 ************************************ 00:24:11.656 10:52:32 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:24:11.656 [2024-10-30 10:52:32.970017] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:24:11.656 [2024-10-30 10:52:32.970200] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90702 ]
00:24:11.915 [2024-10-30 10:52:33.152511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:11.915 [2024-10-30 10:52:33.282455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:12.483 [2024-10-30 10:52:33.818433] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:24:12.483 [2024-10-30 10:52:33.818520] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f
00:24:12.483 [2024-10-30 10:52:33.818575] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:24:12.483 [2024-10-30 10:52:33.819288] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:24:12.483 [2024-10-30 10:52:33.819461] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:24:12.483 [2024-10-30 10:52:33.819528] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:24:12.483 [2024-10-30 10:52:33.819630] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:24:12.483
00:24:12.483 [2024-10-30 10:52:33.819659] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:24:13.862
00:24:13.862 real 0m2.216s
00:24:13.862 user 0m1.795s
00:24:13.862 sys 0m0.297s
00:24:13.862 ************************************
00:24:13.862 END TEST bdev_hello_world
00:24:13.862 ************************************
00:24:13.862 10:52:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:13.862 10:52:35 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:24:13.862 10:52:35 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:24:13.862 10:52:35 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:24:13.862 10:52:35 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable
00:24:13.862 10:52:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:24:13.862 ************************************
00:24:13.862 START TEST bdev_bounds
00:24:13.862 ************************************
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds ''
00:24:13.862 Process bdevio pid: 90743
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90743
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90743'
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90743
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90743 ']'
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:13.862 10:52:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:24:13.862 [2024-10-30 10:52:35.219109] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization...
00:24:13.862 [2024-10-30 10:52:35.219293] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90743 ]
00:24:14.121 [2024-10-30 10:52:35.395828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:14.121 [2024-10-30 10:52:35.534631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:14.121 [2024-10-30 10:52:35.534763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:14.121 [2024-10-30 10:52:35.534777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:15.056 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:15.056 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0
00:24:15.056 10:52:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:24:15.056 I/O targets:
00:24:15.056 raid5f: 131072 blocks of 512 bytes (64 MiB)
00:24:15.056
00:24:15.056
00:24:15.056 CUnit - A unit testing framework for C - Version 2.1-3
00:24:15.056 http://cunit.sourceforge.net/
00:24:15.056
00:24:15.056
00:24:15.056 Suite: bdevio tests on: raid5f
00:24:15.056 Test: blockdev write read block ...passed
00:24:15.056 Test: blockdev write zeroes read block ...passed
00:24:15.056 Test: blockdev write zeroes read no split ...passed
00:24:15.056 Test: blockdev write zeroes read split ...passed
00:24:15.313 Test: blockdev write zeroes read split partial ...passed
00:24:15.313 Test: blockdev reset ...passed
00:24:15.313 Test: blockdev write read 8 blocks ...passed
00:24:15.313 Test: blockdev write read size > 128k ...passed
00:24:15.313 Test: blockdev write read invalid size ...passed
00:24:15.313 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:24:15.313 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:15.313 Test: blockdev write read max offset ...passed 00:24:15.313 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:15.313 Test: blockdev writev readv 8 blocks ...passed 00:24:15.313 Test: blockdev writev readv 30 x 1block ...passed 00:24:15.313 Test: blockdev writev readv block ...passed 00:24:15.313 Test: blockdev writev readv size > 128k ...passed 00:24:15.313 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:15.313 Test: blockdev comparev and writev ...passed 00:24:15.313 Test: blockdev nvme passthru rw ...passed 00:24:15.313 Test: blockdev nvme passthru vendor specific ...passed 00:24:15.313 Test: blockdev nvme admin passthru ...passed 00:24:15.313 Test: blockdev copy ...passed 00:24:15.313 00:24:15.313 Run Summary: Type Total Ran Passed Failed Inactive 00:24:15.313 suites 1 1 n/a 0 0 00:24:15.313 tests 23 23 23 0 0 00:24:15.313 asserts 130 130 130 0 n/a 00:24:15.313 00:24:15.313 Elapsed time = 0.536 seconds 00:24:15.313 0 00:24:15.313 10:52:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90743 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90743 ']' 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90743 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90743 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 90743' 00:24:15.314 killing process with pid 90743 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90743 00:24:15.314 10:52:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90743 00:24:16.691 10:52:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:24:16.691 ************************************ 00:24:16.692 END TEST bdev_bounds 00:24:16.692 ************************************ 00:24:16.692 00:24:16.692 real 0m2.780s 00:24:16.692 user 0m6.986s 00:24:16.692 sys 0m0.451s 00:24:16.692 10:52:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:16.692 10:52:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:16.692 10:52:37 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:16.692 10:52:37 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:16.692 10:52:37 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:16.692 10:52:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:16.692 ************************************ 00:24:16.692 START TEST bdev_nbd 00:24:16.692 ************************************ 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90804 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90804 /var/tmp/spdk-nbd.sock 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90804 ']' 00:24:16.692 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:16.692 10:52:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:16.692 [2024-10-30 10:52:38.072224] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:24:16.692 [2024-10-30 10:52:38.072426] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.958 [2024-10-30 10:52:38.260808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.958 [2024-10-30 10:52:38.392003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:17.891 
10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:17.891 1+0 records in 00:24:17.891 1+0 records out 00:24:17.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035962 s, 11.4 MB/s 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:24:17.891 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:18.458 { 00:24:18.458 "nbd_device": "/dev/nbd0", 00:24:18.458 "bdev_name": "raid5f" 00:24:18.458 } 00:24:18.458 ]' 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:18.458 { 00:24:18.458 "nbd_device": "/dev/nbd0", 00:24:18.458 "bdev_name": "raid5f" 00:24:18.458 } 00:24:18.458 ]' 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock /dev/nbd0 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:18.458 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:18.717 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:18.718 10:52:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:18.976 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:18.976 10:52:40 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:24:18.976 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:18.976 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:18.976 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:18.977 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:24:19.235 /dev/nbd0 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.235 1+0 records in 00:24:19.235 1+0 records out 00:24:19.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362522 s, 11.3 MB/s 00:24:19.235 10:52:40 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:19.235 10:52:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:19.801 { 00:24:19.801 "nbd_device": "/dev/nbd0", 00:24:19.801 "bdev_name": "raid5f" 00:24:19.801 } 00:24:19.801 ]' 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:19.801 { 00:24:19.801 "nbd_device": "/dev/nbd0", 00:24:19.801 "bdev_name": "raid5f" 00:24:19.801 } 00:24:19.801 ]' 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # count=1 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:19.801 256+0 records in 00:24:19.801 256+0 records out 00:24:19.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00806061 s, 130 MB/s 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:19.801 256+0 records in 00:24:19.801 256+0 records out 00:24:19.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0409988 s, 25.6 MB/s 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:19.801 10:52:41 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.801 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.060 
10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:20.060 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:20.319 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:20.319 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:20.319 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:20.577 10:52:41 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:24:20.577 10:52:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:20.836 malloc_lvol_verify 00:24:20.836 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:21.095 1bfba29f-91c7-4d38-9257-6cca2cef265d 00:24:21.095 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:21.354 b61d9b26-4a21-4411-b63d-7c2bb02c4e79 00:24:21.354 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:21.613 /dev/nbd0 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:24:21.613 mke2fs 1.47.0 (5-Feb-2023) 00:24:21.613 Discarding device blocks: 0/4096 done 00:24:21.613 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:21.613 00:24:21.613 Allocating group tables: 0/1 done 00:24:21.613 Writing inode tables: 0/1 done 00:24:21.613 Creating journal (1024 blocks): done 00:24:21.613 Writing superblocks and filesystem accounting information: 0/1 
done 00:24:21.613 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:21.613 10:52:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90804 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90804 ']' 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90804 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90804 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:21.872 killing process with pid 90804 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90804' 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90804 00:24:21.872 10:52:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@976 -- # wait 90804 00:24:23.249 10:52:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:23.249 00:24:23.249 real 0m6.736s 00:24:23.249 user 0m9.683s 00:24:23.249 sys 0m1.476s 00:24:23.249 ************************************ 00:24:23.249 END TEST bdev_nbd 00:24:23.249 ************************************ 00:24:23.249 10:52:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:23.249 10:52:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:23.508 10:52:44 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:24:23.508 10:52:44 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:24:23.508 10:52:44 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:24:23.508 10:52:44 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:24:23.508 10:52:44 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:23.509 10:52:44 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:23.509 10:52:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 ************************************ 00:24:23.509 START TEST bdev_fio 00:24:23.509 
************************************ 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:24:23.509 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:23.509 
10:52:44 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:23.509 ************************************ 00:24:23.509 START TEST bdev_fio_rw_verify 00:24:23.509 ************************************ 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:23.509 10:52:44 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:23.509 10:52:44 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:23.768 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:23.768 fio-3.35 00:24:23.768 Starting 1 thread 00:24:36.021 00:24:36.021 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91019: Wed Oct 30 10:52:56 2024 00:24:36.021 read: IOPS=8243, BW=32.2MiB/s (33.8MB/s)(322MiB/10001msec) 00:24:36.021 slat (usec): min=22, max=205, avg=29.54, stdev= 6.64 00:24:36.021 clat (usec): min=14, max=2714, avg=191.58, stdev=76.87 00:24:36.021 lat (usec): min=42, max=2755, avg=221.12, stdev=78.14 00:24:36.021 clat percentiles (usec): 00:24:36.021 | 50.000th=[ 190], 99.000th=[ 355], 99.900th=[ 506], 
99.990th=[ 717], 00:24:36.021 | 99.999th=[ 2704] 00:24:36.021 write: IOPS=8697, BW=34.0MiB/s (35.6MB/s)(335MiB/9870msec); 0 zone resets 00:24:36.021 slat (usec): min=11, max=1112, avg=24.38, stdev= 7.94 00:24:36.021 clat (usec): min=79, max=1578, avg=443.85, stdev=67.86 00:24:36.021 lat (usec): min=101, max=1601, avg=468.23, stdev=70.36 00:24:36.021 clat percentiles (usec): 00:24:36.021 | 50.000th=[ 445], 99.000th=[ 652], 99.900th=[ 775], 99.990th=[ 848], 00:24:36.021 | 99.999th=[ 1582] 00:24:36.021 bw ( KiB/s): min=28376, max=37608, per=98.45%, avg=34253.32, stdev=2500.37, samples=19 00:24:36.021 iops : min= 7094, max= 9402, avg=8563.26, stdev=625.03, samples=19 00:24:36.021 lat (usec) : 20=0.01%, 100=5.46%, 250=31.53%, 500=54.96%, 750=7.95% 00:24:36.021 lat (usec) : 1000=0.08% 00:24:36.021 lat (msec) : 2=0.01%, 4=0.01% 00:24:36.021 cpu : usr=98.49%, sys=0.69%, ctx=25, majf=0, minf=7231 00:24:36.021 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:36.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.021 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:36.021 issued rwts: total=82439,85847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:36.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:36.021 00:24:36.021 Run status group 0 (all jobs): 00:24:36.021 READ: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=322MiB (338MB), run=10001-10001msec 00:24:36.021 WRITE: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=335MiB (352MB), run=9870-9870msec 00:24:36.283 ----------------------------------------------------- 00:24:36.283 Suppressions used: 00:24:36.283 count bytes template 00:24:36.283 1 7 /usr/src/fio/parse.c 00:24:36.283 842 80832 /usr/src/fio/iolog.c 00:24:36.283 1 8 libtcmalloc_minimal.so 00:24:36.283 1 904 libcrypto.so 00:24:36.283 ----------------------------------------------------- 00:24:36.283 00:24:36.283 
00:24:36.283 real 0m12.858s 00:24:36.283 user 0m13.239s 00:24:36.283 sys 0m0.910s 00:24:36.283 ************************************ 00:24:36.283 END TEST bdev_fio_rw_verify 00:24:36.283 ************************************ 00:24:36.283 10:52:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:36.283 10:52:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:24:36.283 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:24:36.544 10:52:57 
blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fb8025d6-c836-418a-b367-48095b234ae0"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fb8025d6-c836-418a-b367-48095b234ae0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fb8025d6-c836-418a-b367-48095b234ae0",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f3f8e9ea-155f-41f0-8b9b-4d27a6641c10",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7f21b63f-14af-45a0-92e9-b437aa949f2b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' 
' },' ' {' ' "name": "Malloc2",' ' "uuid": "ad834174-07a9-431b-a9be-f574973afdb3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:36.544 /home/vagrant/spdk_repo/spdk 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:24:36.544 00:24:36.544 real 0m13.068s 00:24:36.544 user 0m13.331s 00:24:36.544 sys 0m1.001s 00:24:36.544 ************************************ 00:24:36.544 END TEST bdev_fio 00:24:36.544 ************************************ 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:36.544 10:52:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:36.544 10:52:57 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:36.544 10:52:57 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:36.544 10:52:57 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:24:36.544 10:52:57 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:36.544 10:52:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:36.544 ************************************ 00:24:36.544 START TEST bdev_verify 00:24:36.544 ************************************ 00:24:36.544 10:52:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:36.544 [2024-10-30 10:52:57.969326] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:24:36.544 [2024-10-30 10:52:57.969536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91173 ] 00:24:36.804 [2024-10-30 10:52:58.155838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:37.063 [2024-10-30 10:52:58.287717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.063 [2024-10-30 10:52:58.287729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.631 Running I/O for 5 seconds... 00:24:39.504 9940.00 IOPS, 38.83 MiB/s [2024-10-30T10:53:01.910Z] 9599.00 IOPS, 37.50 MiB/s [2024-10-30T10:53:03.290Z] 10731.00 IOPS, 41.92 MiB/s [2024-10-30T10:53:03.858Z] 11551.25 IOPS, 45.12 MiB/s [2024-10-30T10:53:04.137Z] 11975.60 IOPS, 46.78 MiB/s 00:24:42.667 Latency(us) 00:24:42.667 [2024-10-30T10:53:04.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.667 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:42.667 Verification LBA range: start 0x0 length 0x2000 00:24:42.667 raid5f : 5.02 5949.24 23.24 0.00 0.00 32473.53 256.93 35746.91 00:24:42.667 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:42.667 Verification LBA range: start 0x2000 length 0x2000 00:24:42.667 raid5f : 5.02 6007.78 23.47 0.00 0.00 32075.81 138.71 45517.73 00:24:42.667 [2024-10-30T10:53:04.137Z] =================================================================================================================== 00:24:42.667 [2024-10-30T10:53:04.137Z] Total : 11957.02 46.71 
0.00 0.00 32273.57 138.71 45517.73 00:24:44.044 00:24:44.044 real 0m7.273s 00:24:44.044 user 0m13.331s 00:24:44.044 sys 0m0.330s 00:24:44.044 10:53:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:44.044 ************************************ 00:24:44.044 END TEST bdev_verify 00:24:44.044 ************************************ 00:24:44.044 10:53:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:24:44.044 10:53:05 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:44.044 10:53:05 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:24:44.044 10:53:05 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:44.044 10:53:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:44.044 ************************************ 00:24:44.044 START TEST bdev_verify_big_io 00:24:44.044 ************************************ 00:24:44.044 10:53:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:44.044 [2024-10-30 10:53:05.294079] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 
00:24:44.044 [2024-10-30 10:53:05.294253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91272 ] 00:24:44.044 [2024-10-30 10:53:05.503289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:44.304 [2024-10-30 10:53:05.636869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.304 [2024-10-30 10:53:05.636872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.871 Running I/O for 5 seconds... 00:24:46.743 506.00 IOPS, 31.62 MiB/s [2024-10-30T10:53:09.593Z] 634.00 IOPS, 39.62 MiB/s [2024-10-30T10:53:10.527Z] 655.00 IOPS, 40.94 MiB/s [2024-10-30T10:53:11.462Z] 634.50 IOPS, 39.66 MiB/s [2024-10-30T10:53:11.721Z] 621.60 IOPS, 38.85 MiB/s 00:24:50.251 Latency(us) 00:24:50.251 [2024-10-30T10:53:11.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.251 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:50.251 Verification LBA range: start 0x0 length 0x200 00:24:50.251 raid5f : 5.33 321.32 20.08 0.00 0.00 9737429.18 268.10 448027.93 00:24:50.251 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:50.251 Verification LBA range: start 0x200 length 0x200 00:24:50.251 raid5f : 5.41 328.25 20.52 0.00 0.00 9679883.95 202.01 461373.44 00:24:50.251 [2024-10-30T10:53:11.721Z] =================================================================================================================== 00:24:50.251 [2024-10-30T10:53:11.721Z] Total : 649.57 40.60 0.00 0.00 9708137.02 202.01 461373.44 00:24:51.629 00:24:51.629 real 0m7.675s 00:24:51.629 user 0m14.083s 00:24:51.629 sys 0m0.330s 00:24:51.629 10:53:12 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:51.629 10:53:12 
blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:24:51.629 ************************************ 00:24:51.629 END TEST bdev_verify_big_io 00:24:51.629 ************************************ 00:24:51.629 10:53:12 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:51.629 10:53:12 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:24:51.629 10:53:12 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:51.629 10:53:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:51.629 ************************************ 00:24:51.629 START TEST bdev_write_zeroes 00:24:51.629 ************************************ 00:24:51.629 10:53:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:51.629 [2024-10-30 10:53:12.997534] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization... 00:24:51.629 [2024-10-30 10:53:12.997672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91370 ] 00:24:51.888 [2024-10-30 10:53:13.173404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.888 [2024-10-30 10:53:13.302490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.463 Running I/O for 1 seconds... 
00:24:53.396 18591.00 IOPS, 72.62 MiB/s
00:24:53.396 Latency(us)
00:24:53.396 [2024-10-30T10:53:14.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:53.396 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:24:53.396 raid5f : 1.01 18568.76 72.53 0.00 0.00 6866.86 1921.40 16086.11
00:24:53.396 [2024-10-30T10:53:14.866Z] ===================================================================================================================
00:24:53.396 [2024-10-30T10:53:14.866Z] Total : 18568.76 72.53 0.00 0.00 6866.86 1921.40 16086.11
00:24:54.804
00:24:54.804 real 0m3.256s
00:24:54.804 user 0m2.842s
00:24:54.804 sys 0m0.281s
00:24:54.804 10:53:16 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:54.804 10:53:16 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:24:54.804 ************************************
00:24:54.804 END TEST bdev_write_zeroes
00:24:54.804 ************************************
00:24:54.804 10:53:16 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:24:54.804 10:53:16 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:24:54.804 10:53:16 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable
00:24:54.804 10:53:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:24:54.804 ************************************
00:24:54.804 START TEST bdev_json_nonenclosed
00:24:54.804 ************************************
00:24:54.804 10:53:16 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:24:55.064 [2024-10-30 10:53:16.329374] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization...
00:24:55.064 [2024-10-30 10:53:16.329569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91421 ]
00:24:55.064 [2024-10-30 10:53:16.515759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:55.322 [2024-10-30 10:53:16.644562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:55.322 [2024-10-30 10:53:16.644678] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:24:55.322 [2024-10-30 10:53:16.644720] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:24:55.322 [2024-10-30 10:53:16.644736] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:55.581
00:24:55.581 real 0m0.687s
00:24:55.581 user 0m0.441s
00:24:55.581 sys 0m0.140s
00:24:55.581 10:53:16 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:55.581 10:53:16 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:24:55.581 ************************************
00:24:55.581 END TEST bdev_json_nonenclosed
00:24:55.581 ************************************
00:24:55.581 10:53:16 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:24:55.581 10:53:16 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:24:55.581 10:53:16 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable
00:24:55.581 10:53:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:24:55.581 ************************************
00:24:55.581 START TEST bdev_json_nonarray
00:24:55.581 ************************************
00:24:55.581 10:53:16 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:24:55.581 [2024-10-30 10:53:17.043493] Starting SPDK v25.01-pre git sha1 504f4c967 / DPDK 24.03.0 initialization...
00:24:55.581 [2024-10-30 10:53:17.043656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91452 ]
00:24:55.840 [2024-10-30 10:53:17.214451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:56.098 [2024-10-30 10:53:17.346471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:56.098 [2024-10-30 10:53:17.346617] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:24:56.098 [2024-10-30 10:53:17.346649] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:24:56.098 [2024-10-30 10:53:17.346674] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:56.357
00:24:56.357 real 0m0.650s
00:24:56.357 user 0m0.422s
00:24:56.357 sys 0m0.123s
00:24:56.358 10:53:17 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:56.358 10:53:17 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:24:56.358 ************************************
00:24:56.358 END TEST bdev_json_nonarray
00:24:56.358 ************************************
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]]
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]]
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]]
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:24:56.358 10:53:17 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:24:56.358
00:24:56.358 real 0m49.232s
00:24:56.358 user 1m7.426s
00:24:56.358 sys 0m5.445s
00:24:56.358 10:53:17 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:56.358 10:53:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:24:56.358 ************************************
00:24:56.358 END TEST blockdev_raid5f
00:24:56.358 ************************************
00:24:56.358 10:53:17 -- spdk/autotest.sh@194 -- # uname -s
00:24:56.358 10:53:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:24:56.358 10:53:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:24:56.358 10:53:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:24:56.358 10:53:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@256 -- # timing_exit lib
00:24:56.358 10:53:17 -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:56.358 10:53:17 -- common/autotest_common.sh@10 -- # set +x
00:24:56.358 10:53:17 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:24:56.358 10:53:17 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:24:56.358 10:53:17 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:24:56.358 10:53:17 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:24:56.358 10:53:17 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:24:56.358 10:53:17 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:24:56.358 10:53:17 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:24:56.358 10:53:17 -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:56.358 10:53:17 -- common/autotest_common.sh@10 -- # set +x
00:24:56.358 10:53:17 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:24:56.358 10:53:17 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:24:56.358 10:53:17 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:24:56.358 10:53:17 -- common/autotest_common.sh@10 -- # set +x
00:24:58.264 INFO: APP EXITING
00:24:58.264 INFO: killing all VMs
00:24:58.264 INFO: killing vhost app
00:24:58.264 INFO: EXIT DONE
00:24:58.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:58.264 Waiting for block devices as requested
00:24:58.524 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:24:58.524 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:24:59.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:59.460 Cleaning
00:24:59.460 Removing: /var/run/dpdk/spdk0/config
00:24:59.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:24:59.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:24:59.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:24:59.460 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:24:59.460 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:24:59.460 Removing: /var/run/dpdk/spdk0/hugepage_info
00:24:59.460 Removing: /dev/shm/spdk_tgt_trace.pid57052
00:24:59.460 Removing: /var/run/dpdk/spdk0
00:24:59.460 Removing: /var/run/dpdk/spdk_pid56817
00:24:59.460 Removing: /var/run/dpdk/spdk_pid57052
00:24:59.460 Removing: /var/run/dpdk/spdk_pid57281
00:24:59.460 Removing: /var/run/dpdk/spdk_pid57385
00:24:59.460 Removing: /var/run/dpdk/spdk_pid57441
00:24:59.460 Removing: /var/run/dpdk/spdk_pid57569
00:24:59.460 Removing: /var/run/dpdk/spdk_pid57598
00:24:59.460 Removing: /var/run/dpdk/spdk_pid57797
00:24:59.460 Removing: /var/run/dpdk/spdk_pid57914
00:24:59.460 Removing: /var/run/dpdk/spdk_pid58021
00:24:59.460 Removing: /var/run/dpdk/spdk_pid58143
00:24:59.460 Removing: /var/run/dpdk/spdk_pid58251
00:24:59.460 Removing: /var/run/dpdk/spdk_pid58286
00:24:59.460 Removing: /var/run/dpdk/spdk_pid58327
00:24:59.460 Removing: /var/run/dpdk/spdk_pid58403
00:24:59.460 Removing: /var/run/dpdk/spdk_pid58498
00:24:59.460 Removing: /var/run/dpdk/spdk_pid58974
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59044
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59118
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59138
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59288
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59309
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59452
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59473
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59543
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59561
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59625
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59648
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59849
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59880
00:24:59.460 Removing: /var/run/dpdk/spdk_pid59969
00:24:59.460 Removing: /var/run/dpdk/spdk_pid61334
00:24:59.460 Removing: /var/run/dpdk/spdk_pid61550
00:24:59.460 Removing: /var/run/dpdk/spdk_pid61696
00:24:59.460 Removing: /var/run/dpdk/spdk_pid62350
00:24:59.460 Removing: /var/run/dpdk/spdk_pid62566
00:24:59.460 Removing: /var/run/dpdk/spdk_pid62713
00:24:59.460 Removing: /var/run/dpdk/spdk_pid63362
00:24:59.460 Removing: /var/run/dpdk/spdk_pid63697
00:24:59.460 Removing: /var/run/dpdk/spdk_pid63843
00:24:59.460 Removing: /var/run/dpdk/spdk_pid65256
00:24:59.460 Removing: /var/run/dpdk/spdk_pid65514
00:24:59.460 Removing: /var/run/dpdk/spdk_pid65660
00:24:59.460 Removing: /var/run/dpdk/spdk_pid67073
00:24:59.460 Removing: /var/run/dpdk/spdk_pid67326
00:24:59.460 Removing: /var/run/dpdk/spdk_pid67477
00:24:59.460 Removing: /var/run/dpdk/spdk_pid68892
00:24:59.460 Removing: /var/run/dpdk/spdk_pid69343
00:24:59.460 Removing: /var/run/dpdk/spdk_pid69490
00:24:59.460 Removing: /var/run/dpdk/spdk_pid71007
00:24:59.460 Removing: /var/run/dpdk/spdk_pid71268
00:24:59.460 Removing: /var/run/dpdk/spdk_pid71415
00:24:59.460 Removing: /var/run/dpdk/spdk_pid72928
00:24:59.460 Removing: /var/run/dpdk/spdk_pid73198
00:24:59.460 Removing: /var/run/dpdk/spdk_pid73344
00:24:59.460 Removing: /var/run/dpdk/spdk_pid74854
00:24:59.460 Removing: /var/run/dpdk/spdk_pid75352
00:24:59.460 Removing: /var/run/dpdk/spdk_pid75498
00:24:59.460 Removing: /var/run/dpdk/spdk_pid75646
00:24:59.460 Removing: /var/run/dpdk/spdk_pid76093
00:24:59.460 Removing: /var/run/dpdk/spdk_pid76864
00:24:59.460 Removing: /var/run/dpdk/spdk_pid77241
00:24:59.460 Removing: /var/run/dpdk/spdk_pid77941
00:24:59.460 Removing: /var/run/dpdk/spdk_pid78422
00:24:59.460 Removing: /var/run/dpdk/spdk_pid79221
00:24:59.460 Removing: /var/run/dpdk/spdk_pid79641
00:24:59.460 Removing: /var/run/dpdk/spdk_pid81650
00:24:59.460 Removing: /var/run/dpdk/spdk_pid82095
00:24:59.460 Removing: /var/run/dpdk/spdk_pid82547
00:24:59.719 Removing: /var/run/dpdk/spdk_pid84668
00:24:59.719 Removing: /var/run/dpdk/spdk_pid85165
00:24:59.719 Removing: /var/run/dpdk/spdk_pid85678
00:24:59.719 Removing: /var/run/dpdk/spdk_pid86754
00:24:59.719 Removing: /var/run/dpdk/spdk_pid87082
00:24:59.719 Removing: /var/run/dpdk/spdk_pid88049
00:24:59.719 Removing: /var/run/dpdk/spdk_pid88377
00:24:59.719 Removing: /var/run/dpdk/spdk_pid89340
00:24:59.719 Removing: /var/run/dpdk/spdk_pid89670
00:24:59.719 Removing: /var/run/dpdk/spdk_pid90353
00:24:59.719 Removing: /var/run/dpdk/spdk_pid90635
00:24:59.719 Removing: /var/run/dpdk/spdk_pid90702
00:24:59.719 Removing: /var/run/dpdk/spdk_pid90743
00:24:59.719 Removing: /var/run/dpdk/spdk_pid91004
00:24:59.719 Removing: /var/run/dpdk/spdk_pid91173
00:24:59.719 Removing: /var/run/dpdk/spdk_pid91272
00:24:59.719 Removing: /var/run/dpdk/spdk_pid91370
00:24:59.719 Removing: /var/run/dpdk/spdk_pid91421
00:24:59.719 Removing: /var/run/dpdk/spdk_pid91452
00:24:59.719 Clean
00:24:59.719 10:53:21 -- common/autotest_common.sh@1451 -- # return 0
00:24:59.719 10:53:21 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:24:59.719 10:53:21 -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:59.719 10:53:21 -- common/autotest_common.sh@10 -- # set +x
00:24:59.719 10:53:21 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:24:59.719 10:53:21 -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:59.719 10:53:21 -- common/autotest_common.sh@10 -- # set +x
00:24:59.719 10:53:21 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:24:59.719 10:53:21 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:24:59.719 10:53:21 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:24:59.719 10:53:21 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:24:59.719 10:53:21 -- spdk/autotest.sh@394 -- # hostname
00:24:59.719 10:53:21 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:24:59.977 geninfo: WARNING: invalid characters removed from testname!
00:25:26.528 10:53:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:30.717 10:53:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:33.246 10:53:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:35.785 10:53:57 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:38.329 10:53:59 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:40.863 10:54:02 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:43.420 10:54:04 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:43.420 10:54:04 -- spdk/autorun.sh@1 -- $ timing_finish
00:25:43.420 10:54:04 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:25:43.420 10:54:04 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:43.420 10:54:04 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:25:43.420 10:54:04 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:43.679 + [[ -n 5200 ]]
00:25:43.679 + sudo kill 5200
00:25:43.689 [Pipeline] }
00:25:43.705 [Pipeline] // timeout
00:25:43.711 [Pipeline] }
00:25:43.726 [Pipeline] // stage
00:25:43.731 [Pipeline] }
00:25:43.745 [Pipeline] // catchError
00:25:43.754 [Pipeline] stage
00:25:43.757 [Pipeline] { (Stop VM)
00:25:43.769 [Pipeline] sh
00:25:44.049 + vagrant halt
00:25:48.240 ==> default: Halting domain...
00:25:53.633 [Pipeline] sh
00:25:53.915 + vagrant destroy -f
00:25:58.109 ==> default: Removing domain...
00:25:58.121 [Pipeline] sh
00:25:58.401 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output
00:25:58.408 [Pipeline] }
00:25:58.417 [Pipeline] // stage
00:25:58.421 [Pipeline] }
00:25:58.429 [Pipeline] // dir
00:25:58.432 [Pipeline] }
00:25:58.440 [Pipeline] // wrap
00:25:58.444 [Pipeline] }
00:25:58.455 [Pipeline] // catchError
00:25:58.463 [Pipeline] stage
00:25:58.465 [Pipeline] { (Epilogue)
00:25:58.477 [Pipeline] sh
00:25:58.756 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:26:05.361 [Pipeline] catchError
00:26:05.363 [Pipeline] {
00:26:05.378 [Pipeline] sh
00:26:05.663 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:26:05.663 Artifacts sizes are good
00:26:05.672 [Pipeline] }
00:26:05.690 [Pipeline] // catchError
00:26:05.706 [Pipeline] archiveArtifacts
00:26:05.713 Archiving artifacts
00:26:05.814 [Pipeline] cleanWs
00:26:05.826 [WS-CLEANUP] Deleting project workspace...
00:26:05.826 [WS-CLEANUP] Deferred wipeout is used...
00:26:05.832 [WS-CLEANUP] done
00:26:05.834 [Pipeline] }
00:26:05.851 [Pipeline] // stage
00:26:05.856 [Pipeline] }
00:26:05.870 [Pipeline] // node
00:26:05.875 [Pipeline] End of Pipeline
00:26:05.915 Finished: SUCCESS